I0523 23:37:57.537370 7 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0523 23:37:57.537567 7 e2e.go:129] Starting e2e run "790663ad-edda-4dae-8310-5bd91a68e7e9" on Ginkgo node 1
{"msg":"Test Suite starting","total":288,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1590277076 - Will randomize all specs
Will run 288 of 5095 specs

May 23 23:37:57.590: INFO: >>> kubeConfig: /root/.kube/config
May 23 23:37:57.593: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 23 23:37:57.612: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 23 23:37:57.638: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 23 23:37:57.638: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 23 23:37:57.638: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 23 23:37:57.644: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 23 23:37:57.644: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 23 23:37:57.644: INFO: e2e test version: v1.19.0-alpha.3.35+3416442e4b7eeb
May 23 23:37:57.644: INFO: kube-apiserver version: v1.18.2
May 23 23:37:57.644: INFO: >>> kubeConfig: /root/.kube/config
May 23 23:37:57.648: INFO: Cluster IP family: ipv4
SSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 23:37:57.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
May 23 23:37:57.725: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 23 23:37:57.735: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9fc2e7c8-563e-4b5d-ad96-c3df98c2279b" in namespace "downward-api-5326" to be "Succeeded or Failed"
May 23 23:37:57.755: INFO: Pod "downwardapi-volume-9fc2e7c8-563e-4b5d-ad96-c3df98c2279b": Phase="Pending", Reason="", readiness=false. Elapsed: 20.022558ms
May 23 23:37:59.962: INFO: Pod "downwardapi-volume-9fc2e7c8-563e-4b5d-ad96-c3df98c2279b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.226633456s
May 23 23:38:01.982: INFO: Pod "downwardapi-volume-9fc2e7c8-563e-4b5d-ad96-c3df98c2279b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.246997864s
STEP: Saw pod success
May 23 23:38:01.982: INFO: Pod "downwardapi-volume-9fc2e7c8-563e-4b5d-ad96-c3df98c2279b" satisfied condition "Succeeded or Failed"
May 23 23:38:01.985: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-9fc2e7c8-563e-4b5d-ad96-c3df98c2279b container client-container:
STEP: delete the pod
May 23 23:38:02.105: INFO: Waiting for pod downwardapi-volume-9fc2e7c8-563e-4b5d-ad96-c3df98c2279b to disappear
May 23 23:38:02.118: INFO: Pod downwardapi-volume-9fc2e7c8-563e-4b5d-ad96-c3df98c2279b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 23:38:02.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5326" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":288,"completed":1,"skipped":4,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
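The spec above creates a pod whose downward API volume projects the container's CPU limit into a file, then expects the pod to reach "Succeeded". A minimal sketch of that kind of manifest, assuming illustrative names, image, and limit (the framework generates its own UUID-based names and uses its own test images):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical; the test uses a generated name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"                  # assumed value; this is what gets projected
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:            # projects the container's CPU limit into the file
          containerName: client-container
          resource: limits.cpu
```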
[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 23:38:02.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-95a23b40-21ab-4df0-b5d4-493a94303ee8
STEP: Creating a pod to test consume configMaps
May 23 23:38:02.491: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8fe080ea-d168-40fc-bcc0-7d6f30a19b75" in namespace "projected-6080" to be "Succeeded or Failed"
May 23 23:38:02.518: INFO: Pod "pod-projected-configmaps-8fe080ea-d168-40fc-bcc0-7d6f30a19b75": Phase="Pending", Reason="", readiness=false. Elapsed: 26.415997ms
May 23 23:38:04.523: INFO: Pod "pod-projected-configmaps-8fe080ea-d168-40fc-bcc0-7d6f30a19b75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031386943s
May 23 23:38:06.555: INFO: Pod "pod-projected-configmaps-8fe080ea-d168-40fc-bcc0-7d6f30a19b75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063957579s
STEP: Saw pod success
May 23 23:38:06.555: INFO: Pod "pod-projected-configmaps-8fe080ea-d168-40fc-bcc0-7d6f30a19b75" satisfied condition "Succeeded or Failed"
May 23 23:38:06.558: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-8fe080ea-d168-40fc-bcc0-7d6f30a19b75 container projected-configmap-volume-test:
STEP: delete the pod
May 23 23:38:06.610: INFO: Waiting for pod pod-projected-configmaps-8fe080ea-d168-40fc-bcc0-7d6f30a19b75 to disappear
May 23 23:38:06.630: INFO: Pod pod-projected-configmaps-8fe080ea-d168-40fc-bcc0-7d6f30a19b75 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 23:38:06.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6080" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":2,"skipped":80,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
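This spec mounts a ConfigMap through a projected volume, remapping a key to a different file path inside the mount. A sketch of the shape of the objects involved, with illustrative names and data (the suite generates UUID-suffixed names):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-configmap-test-volume-map   # hypothetical name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                             # assumed image
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map
          items:
          - key: data-1          # key in the ConfigMap
            path: path/to/data-2 # remapped file path in the volume
```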
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 23:38:06.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 23 23:38:06.713: INFO: Waiting up to 5m0s for pod "downwardapi-volume-126fe4ef-b682-453e-8786-03907da960e5" in namespace "projected-3800" to be "Succeeded or Failed"
May 23 23:38:06.757: INFO: Pod "downwardapi-volume-126fe4ef-b682-453e-8786-03907da960e5": Phase="Pending", Reason="", readiness=false. Elapsed: 44.76159ms
May 23 23:38:08.903: INFO: Pod "downwardapi-volume-126fe4ef-b682-453e-8786-03907da960e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.190402385s
May 23 23:38:10.907: INFO: Pod "downwardapi-volume-126fe4ef-b682-453e-8786-03907da960e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.194739568s
STEP: Saw pod success
May 23 23:38:10.907: INFO: Pod "downwardapi-volume-126fe4ef-b682-453e-8786-03907da960e5" satisfied condition "Succeeded or Failed"
May 23 23:38:10.910: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-126fe4ef-b682-453e-8786-03907da960e5 container client-container:
STEP: delete the pod
May 23 23:38:10.948: INFO: Waiting for pod downwardapi-volume-126fe4ef-b682-453e-8786-03907da960e5 to disappear
May 23 23:38:10.959: INFO: Pod downwardapi-volume-126fe4ef-b682-453e-8786-03907da960e5 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 23:38:10.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3800" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":3,"skipped":138,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 23:38:10.989: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 23 23:38:11.052: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d64313b1-9c03-440e-a889-f3c3f4abe4f3" in namespace "projected-9876" to be "Succeeded or Failed"
May 23 23:38:11.077: INFO: Pod "downwardapi-volume-d64313b1-9c03-440e-a889-f3c3f4abe4f3": Phase="Pending", Reason="", readiness=false. Elapsed: 24.767927ms
May 23 23:38:13.082: INFO: Pod "downwardapi-volume-d64313b1-9c03-440e-a889-f3c3f4abe4f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029553772s
May 23 23:38:15.087: INFO: Pod "downwardapi-volume-d64313b1-9c03-440e-a889-f3c3f4abe4f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034497691s
STEP: Saw pod success
May 23 23:38:15.087: INFO: Pod "downwardapi-volume-d64313b1-9c03-440e-a889-f3c3f4abe4f3" satisfied condition "Succeeded or Failed"
May 23 23:38:15.090: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-d64313b1-9c03-440e-a889-f3c3f4abe4f3 container client-container:
STEP: delete the pod
May 23 23:38:15.107: INFO: Waiting for pod downwardapi-volume-d64313b1-9c03-440e-a889-f3c3f4abe4f3 to disappear
May 23 23:38:15.196: INFO: Pod downwardapi-volume-d64313b1-9c03-440e-a889-f3c3f4abe4f3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 23:38:15.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9876" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":288,"completed":4,"skipped":166,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 23:38:15.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on tmpfs
May 23 23:38:15.437: INFO: Waiting up to 5m0s for pod "pod-f60343b3-c6be-42f3-a8c7-b0beba466bcd" in namespace "emptydir-8169" to be "Succeeded or Failed"
May 23 23:38:15.454: INFO: Pod "pod-f60343b3-c6be-42f3-a8c7-b0beba466bcd": Phase="Pending", Reason="", readiness=false. Elapsed: 16.110056ms
May 23 23:38:17.458: INFO: Pod "pod-f60343b3-c6be-42f3-a8c7-b0beba466bcd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021000948s
May 23 23:38:19.462: INFO: Pod "pod-f60343b3-c6be-42f3-a8c7-b0beba466bcd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024923497s
STEP: Saw pod success
May 23 23:38:19.462: INFO: Pod "pod-f60343b3-c6be-42f3-a8c7-b0beba466bcd" satisfied condition "Succeeded or Failed"
May 23 23:38:19.465: INFO: Trying to get logs from node latest-worker2 pod pod-f60343b3-c6be-42f3-a8c7-b0beba466bcd container test-container:
STEP: delete the pod
May 23 23:38:19.506: INFO: Waiting for pod pod-f60343b3-c6be-42f3-a8c7-b0beba466bcd to disappear
May 23 23:38:19.511: INFO: Pod pod-f60343b3-c6be-42f3-a8c7-b0beba466bcd no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 23:38:19.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8169" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":5,"skipped":191,"failed":0}
------------------------------
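The emptyDir spec above writes a 0666-mode file into a memory-backed (tmpfs) emptyDir as a non-root user and checks the result. A sketch under those assumptions (the UID, image, and command are illustrative; the suite uses its own mounttest image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-example    # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001               # non-root UID; assumed value
  containers:
  - name: test-container
    image: busybox                # assumed image
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory              # tmpfs-backed emptyDir, per the "(tmpfs)" variant
```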
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":5,"skipped":191,"failed":0} ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:38:19.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 23 23:38:19.590: INFO: Waiting up to 5m0s for pod "downward-api-ed1e71c7-8951-4a69-9c96-951f6ef9de3b" in namespace "downward-api-8733" to be "Succeeded or Failed" May 23 23:38:19.596: INFO: Pod "downward-api-ed1e71c7-8951-4a69-9c96-951f6ef9de3b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093597ms May 23 23:38:21.600: INFO: Pod "downward-api-ed1e71c7-8951-4a69-9c96-951f6ef9de3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010030251s May 23 23:38:23.603: INFO: Pod "downward-api-ed1e71c7-8951-4a69-9c96-951f6ef9de3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013308845s STEP: Saw pod success May 23 23:38:23.603: INFO: Pod "downward-api-ed1e71c7-8951-4a69-9c96-951f6ef9de3b" satisfied condition "Succeeded or Failed" May 23 23:38:23.606: INFO: Trying to get logs from node latest-worker pod downward-api-ed1e71c7-8951-4a69-9c96-951f6ef9de3b container dapi-container: STEP: delete the pod May 23 23:38:23.626: INFO: Waiting for pod downward-api-ed1e71c7-8951-4a69-9c96-951f6ef9de3b to disappear May 23 23:38:23.631: INFO: Pod downward-api-ed1e71c7-8951-4a69-9c96-951f6ef9de3b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:38:23.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8733" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":288,"completed":6,"skipped":191,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:38:23.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-7726dc85-b569-42a7-9e5e-53ad1d5e4739 STEP: Creating configMap with name cm-test-opt-upd-3e1e17cf-9a2d-4a3e-a842-74c6755f42e4 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-7726dc85-b569-42a7-9e5e-53ad1d5e4739 STEP: Updating configmap cm-test-opt-upd-3e1e17cf-9a2d-4a3e-a842-74c6755f42e4 STEP: Creating configMap with name cm-test-opt-create-61f0ae37-4384-4040-9080-00534ac364ea STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:38:31.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3884" for this suite. • [SLOW TEST:8.250 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":7,"skipped":225,"failed":0} [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:38:31.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 23:38:45.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 23 23:38:45.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 23:38:49.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8118" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":288,"completed":9,"skipped":255,"failed":0}
SSSSSS
------------------------------
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":288,"completed":9,"skipped":255,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:38:49.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-0c2a0d78-19ba-4427-9cad-ea10323d0df3 STEP: Creating a pod to test consume secrets May 23 23:38:49.522: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-51c29a8d-aba2-44e0-bb8e-a7a5a9b7efca" in namespace "projected-3282" to be "Succeeded or Failed" May 23 23:38:49.549: INFO: Pod "pod-projected-secrets-51c29a8d-aba2-44e0-bb8e-a7a5a9b7efca": Phase="Pending", Reason="", readiness=false. Elapsed: 27.25448ms May 23 23:38:51.552: INFO: Pod "pod-projected-secrets-51c29a8d-aba2-44e0-bb8e-a7a5a9b7efca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03083484s May 23 23:38:53.592: INFO: Pod "pod-projected-secrets-51c29a8d-aba2-44e0-bb8e-a7a5a9b7efca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070300612s STEP: Saw pod success May 23 23:38:53.592: INFO: Pod "pod-projected-secrets-51c29a8d-aba2-44e0-bb8e-a7a5a9b7efca" satisfied condition "Succeeded or Failed" May 23 23:38:53.595: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-51c29a8d-aba2-44e0-bb8e-a7a5a9b7efca container projected-secret-volume-test: STEP: delete the pod May 23 23:38:53.633: INFO: Waiting for pod pod-projected-secrets-51c29a8d-aba2-44e0-bb8e-a7a5a9b7efca to disappear May 23 23:38:53.638: INFO: Pod pod-projected-secrets-51c29a8d-aba2-44e0-bb8e-a7a5a9b7efca no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:38:53.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3282" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":10,"skipped":261,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:38:53.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating replication controller my-hostname-basic-d5a7fce6-4d81-409a-b9f6-9a8b610528ec May 23 23:38:53.745: INFO: Pod name my-hostname-basic-d5a7fce6-4d81-409a-b9f6-9a8b610528ec: Found 0 pods out of 1 May 23 23:38:58.759: INFO: Pod name my-hostname-basic-d5a7fce6-4d81-409a-b9f6-9a8b610528ec: Found 1 pods out of 1 May 23 23:38:58.759: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-d5a7fce6-4d81-409a-b9f6-9a8b610528ec" are running May 23 23:38:58.762: INFO: Pod "my-hostname-basic-d5a7fce6-4d81-409a-b9f6-9a8b610528ec-klwdf" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-23 23:38:53 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-23 23:38:57 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-23 23:38:57 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-23 23:38:53 +0000 UTC Reason: Message:}]) May 23 23:38:58.762: INFO: Trying to dial the pod May 23 23:39:03.773: INFO: Controller my-hostname-basic-d5a7fce6-4d81-409a-b9f6-9a8b610528ec: Got expected result from replica 1 [my-hostname-basic-d5a7fce6-4d81-409a-b9f6-9a8b610528ec-klwdf]: "my-hostname-basic-d5a7fce6-4d81-409a-b9f6-9a8b610528ec-klwdf", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:39:03.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9039" for this suite. 
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 23:39:03.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 23:39:34.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6202" for this suite.

• [SLOW TEST:31.056 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":288,"completed":12,"skipped":331,"failed":0}
SSSSS
------------------------------
• [SLOW TEST:31.056 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":288,"completed":12,"skipped":331,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:39:34.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 23 23:39:39.421: INFO: Successfully updated pod "annotationupdate9ba0e886-7c10-4439-a4cc-97e4cce0b633" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:39:43.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5352" for this suite. 
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 23:39:43.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
May 23 23:39:43.542: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 23 23:39:43.580: INFO: Waiting for terminating namespaces to be deleted...
May 23 23:39:43.583: INFO: Logging pods the apiserver thinks is on node latest-worker before test
May 23 23:39:43.590: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded)
May 23 23:39:43.590: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0
May 23 23:39:43.590: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded)
May 23 23:39:43.590: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0
May 23 23:39:43.590: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded)
May 23 23:39:43.590: INFO: Container kindnet-cni ready: true, restart count 0
May 23 23:39:43.590: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded)
May 23 23:39:43.590: INFO: Container kube-proxy ready: true, restart count 0
May 23 23:39:43.590: INFO: annotationupdate9ba0e886-7c10-4439-a4cc-97e4cce0b633 from projected-5352 started at 2020-05-23 23:39:34 +0000 UTC (1 container statuses recorded)
May 23 23:39:43.590: INFO: Container client-container ready: true, restart count 0
May 23 23:39:43.590: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test
May 23 23:39:43.596: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded)
May 23 23:39:43.596: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0
May 23 23:39:43.596: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded)
May 23 23:39:43.596: INFO: Container terminate-cmd-rpa ready: true, restart count 2
May 23 23:39:43.596: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded)
May 23 23:39:43.596: INFO: Container kindnet-cni ready: true, restart count 0
May 23 23:39:43.596: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded)
May 23 23:39:43.596: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.1611ccac40337c55], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
STEP: Considering event: Type = [Warning], Name = [restricted-pod.1611ccac43442832], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 23:39:44.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4858" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":288,"completed":14,"skipped":354,"failed":0}
SSSSSSSS
------------------------------
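The FailedScheduling events above come from a pod whose nodeSelector matches no node in the cluster. A sketch of such a pod (the pod name comes from the events; the label key/value and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    label: nonexistent-value         # no node carries this label, so scheduling fails
  containers:
  - name: restricted-pod
    image: k8s.gcr.io/pause:3.2      # assumed image
```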
[sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 23:39:44.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0523 23:39:54.795306 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 23 23:39:54.795: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 23:39:54.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7446" for this suite.

• [SLOW TEST:10.175 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":288,"completed":15,"skipped":362,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 23:39:54.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 23 23:39:55.432: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 23 23:39:57.443: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725873995, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725873995, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725873995, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725873995, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 23 23:40:00.510: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 23:40:00.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4987" for this suite.
STEP: Destroying namespace "webhook-4987-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.029 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":288,"completed":16,"skipped":384,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
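The spec registers a ValidatingWebhookConfiguration pointing at the e2e-test-webhook service named in the log, then updates and patches its rules so the CREATE operation is first excluded and then re-included, checking which ConfigMap creates get rejected at each stage. A sketch of such a configuration (the webhook name, path, and rejection policy are assumptions; a real configuration also needs a caBundle or cluster CA trust):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: e2e-test-webhook-config        # hypothetical name
webhooks:
- name: deny-configmaps.example.com    # hypothetical fully qualified name
  admissionReviewVersions: ["v1"]
  sideEffects: None
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]             # the list the test updates/patches
    resources: ["configmaps"]
  clientConfig:
    service:
      namespace: webhook-4987          # namespace from the log
      name: e2e-test-webhook           # service name from the log
      path: /validate                  # assumed path
```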
[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 23:40:00.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test hostPath mode
May 23 23:40:00.953: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-7144" to be "Succeeded or Failed"
May 23 23:40:01.137: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 183.946096ms
May 23 23:40:03.149: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195383559s
May 23 23:40:05.152: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.199052474s
May 23 23:40:07.157: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.203816219s
STEP: Saw pod success
May 23 23:40:07.157: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
May 23 23:40:07.161: INFO: Trying to get logs from node latest-worker2 pod pod-host-path-test container test-container-1:
STEP: delete the pod
May 23 23:40:07.217: INFO: Waiting for pod pod-host-path-test to disappear
May 23 23:40:07.229: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 23:40:07.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-7144" for this suite.

• [SLOW TEST:6.404 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":17,"skipped":402,"failed":0}
SSSSSSSSSS
------------------------------
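This spec mounts a hostPath volume into the pod named in the log and checks the mode the container observes on the mount point. A sketch (the host path, type, image, and command are illustrative; the suite uses its own mounttest image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-host-path-test           # name from the log
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: busybox                   # assumed image
    command: ["sh", "-c", "stat -c %a /test-volume"]   # assumed mode check
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/hostpath-test       # assumed path on the node
      type: DirectoryOrCreate
```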
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 23:40:07.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 23:40:11.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-940" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":18,"skipped":412,"failed":0}
SSS
------------------------------
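The Kubelet spec verifies that a busybox container with a read-only root filesystem cannot write outside its mounted volumes; readOnlyRootFilesystem in the container securityContext is the mechanism. An illustrative sketch (name and command are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-example     # hypothetical name
spec:
  containers:
  - name: busybox-readonly
    image: busybox
    command: ["sh", "-c", "touch /file; sleep 3600"]   # the write to / should fail
    securityContext:
      readOnlyRootFilesystem: true
```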
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":18,"skipped":412,"failed":0} SSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:40:11.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 23 23:40:11.480: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 23 23:40:11.495: INFO: Number of nodes with available pods: 0 May 23 23:40:11.495: INFO: Node latest-worker is running more than one daemon pod May 23 23:40:12.504: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 23 23:40:12.508: INFO: Number of nodes with available pods: 0 May 23 23:40:12.508: INFO: Node latest-worker is running more than one daemon pod May 23 23:40:13.511: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 23 23:40:13.570: INFO: Number of nodes with available pods: 0 May 23 23:40:13.570: INFO: Node latest-worker is running more than one daemon pod May 23 23:40:14.528: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 23 23:40:14.532: INFO: Number of nodes with available pods: 0 May 23 23:40:14.532: INFO: Node latest-worker is running more than one daemon pod May 23 23:40:15.513: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 23 23:40:15.524: INFO: Number of nodes with available pods: 2 May 23 23:40:15.524: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. May 23 23:40:15.559: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 23 23:40:15.564: INFO: Number of nodes with available pods: 2 May 23 23:40:15.564: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 23:40:25.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-bp245 in namespace proxy-3161
I0523 23:40:25.425004 7 runners.go:190] Created replication controller with name: proxy-service-bp245, namespace: proxy-3161, replica count: 1
I0523 23:40:26.475429 7 runners.go:190] proxy-service-bp245 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0523 23:40:27.475707 7 runners.go:190] proxy-service-bp245 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0523 23:40:28.475932 7 runners.go:190] proxy-service-bp245 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0523 23:40:29.476135 7 runners.go:190] proxy-service-bp245 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0523 23:40:30.476333 7 runners.go:190] proxy-service-bp245 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0523 23:40:31.476608 7 runners.go:190] proxy-service-bp245 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0523 23:40:32.476838 7 runners.go:190] proxy-service-bp245 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0523 23:40:33.477072 7 runners.go:190] proxy-service-bp245 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0523 23:40:34.477252 7 runners.go:190] proxy-service-bp245 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0523 23:40:35.477474 7 runners.go:190] proxy-service-bp245 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0523 23:40:36.477699 7 runners.go:190] proxy-service-bp245 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0523 23:40:37.477894 7 runners.go:190] proxy-service-bp245 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 23 23:40:37.480: INFO: setup took 12.127826163s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
May 23 23:40:37.487: INFO: (0) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:162/proxy/: bar (200; 6.062754ms)
May 23 23:40:37.487: INFO: (0) /api/v1/namespaces/proxy-3161/pods/http:proxy-service-bp245-llhlj:162/proxy/: bar (200; 6.849244ms)
May 23 23:40:37.488: INFO: (0) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:160/proxy/: foo (200; 7.297004ms)
May 23 23:40:37.488: INFO: (0) /api/v1/namespaces/proxy-3161/pods/http:proxy-service-bp245-llhlj:160/proxy/: foo (200; 7.491195ms)
May 23 23:40:37.489: INFO: (0) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj/proxy/: test (200; 8.278083ms)
May 23 23:40:37.489: INFO: (0) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:1080/proxy/: test<... (200; 8.637326ms)
May 23 23:40:37.489: INFO: (0) /api/v1/namespaces/proxy-3161/services/proxy-service-bp245:portname1/proxy/: foo (200; 8.898628ms)
May 23 23:40:37.491: INFO: (0) /api/v1/namespaces/proxy-3161/pods/http:proxy-service-bp245-llhlj:1080/proxy/: ... (200; 9.845419ms)
May 23 23:40:37.492: INFO: (0) /api/v1/namespaces/proxy-3161/services/http:proxy-service-bp245:portname2/proxy/: bar (200; 11.249647ms)
May 23 23:40:37.492: INFO: (0) /api/v1/namespaces/proxy-3161/services/http:proxy-service-bp245:portname1/proxy/: foo (200; 11.216037ms)
May 23 23:40:37.492: INFO: (0) /api/v1/namespaces/proxy-3161/services/proxy-service-bp245:portname2/proxy/: bar (200; 11.565945ms)
May 23 23:40:37.523: INFO: (0) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:462/proxy/: tls qux (200; 41.693806ms)
May 23 23:40:37.523: INFO: (0) /api/v1/namespaces/proxy-3161/services/https:proxy-service-bp245:tlsportname2/proxy/: tls qux (200; 42.044918ms)
May 23 23:40:37.523: INFO: (0) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:460/proxy/: tls baz (200; 41.923993ms)
May 23 23:40:37.523: INFO: (0) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:443/proxy/: test<... (200; 3.151612ms)
(200; 3.151612ms) May 23 23:40:37.526: INFO: (1) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:443/proxy/: ... (200; 14.476134ms) May 23 23:40:37.537: INFO: (1) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj/proxy/: test (200; 14.668585ms) May 23 23:40:37.537: INFO: (1) /api/v1/namespaces/proxy-3161/services/https:proxy-service-bp245:tlsportname2/proxy/: tls qux (200; 14.769647ms) May 23 23:40:37.538: INFO: (1) /api/v1/namespaces/proxy-3161/services/proxy-service-bp245:portname1/proxy/: foo (200; 15.618096ms) May 23 23:40:37.538: INFO: (1) /api/v1/namespaces/proxy-3161/services/http:proxy-service-bp245:portname1/proxy/: foo (200; 15.647303ms) May 23 23:40:37.538: INFO: (1) /api/v1/namespaces/proxy-3161/services/http:proxy-service-bp245:portname2/proxy/: bar (200; 15.585024ms) May 23 23:40:37.538: INFO: (1) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:460/proxy/: tls baz (200; 15.843694ms) May 23 23:40:37.538: INFO: (1) /api/v1/namespaces/proxy-3161/services/proxy-service-bp245:portname2/proxy/: bar (200; 15.773064ms) May 23 23:40:37.538: INFO: (1) /api/v1/namespaces/proxy-3161/services/https:proxy-service-bp245:tlsportname1/proxy/: tls baz (200; 15.785454ms) May 23 23:40:37.539: INFO: (1) /api/v1/namespaces/proxy-3161/pods/http:proxy-service-bp245-llhlj:162/proxy/: bar (200; 15.966048ms) May 23 23:40:37.542: INFO: (2) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:462/proxy/: tls qux (200; 3.005001ms) May 23 23:40:37.544: INFO: (2) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:160/proxy/: foo (200; 5.110274ms) May 23 23:40:37.544: INFO: (2) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:460/proxy/: tls baz (200; 5.259089ms) May 23 23:40:37.545: INFO: (2) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj/proxy/: test (200; 6.118916ms) May 23 23:40:37.545: INFO: (2) /api/v1/namespaces/proxy-3161/services/proxy-service-bp245:portname2/proxy/: bar (200; 6.270956ms) May 23 23:40:37.545: INFO: (2) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:443/proxy/: test<... (200; 6.416931ms) May 23 23:40:37.545: INFO: (2) /api/v1/namespaces/proxy-3161/pods/http:proxy-service-bp245-llhlj:160/proxy/: foo (200; 6.474581ms) May 23 23:40:37.545: INFO: (2) /api/v1/namespaces/proxy-3161/services/http:proxy-service-bp245:portname2/proxy/: bar (200; 6.534655ms) May 23 23:40:37.545: INFO: (2) /api/v1/namespaces/proxy-3161/pods/http:proxy-service-bp245-llhlj:162/proxy/: bar (200; 6.611724ms) May 23 23:40:37.545: INFO: (2) /api/v1/namespaces/proxy-3161/pods/http:proxy-service-bp245-llhlj:1080/proxy/: ... (200; 6.722755ms) May 23 23:40:37.546: INFO: (2) /api/v1/namespaces/proxy-3161/services/https:proxy-service-bp245:tlsportname1/proxy/: tls baz (200; 6.755822ms) May 23 23:40:37.546: INFO: (2) /api/v1/namespaces/proxy-3161/services/https:proxy-service-bp245:tlsportname2/proxy/: tls qux (200; 6.74523ms) May 23 23:40:37.546: INFO: (2) /api/v1/namespaces/proxy-3161/services/http:proxy-service-bp245:portname1/proxy/: foo (200; 6.933882ms) May 23 23:40:37.546: INFO: (2) /api/v1/namespaces/proxy-3161/services/proxy-service-bp245:portname1/proxy/: foo (200; 7.226508ms) May 23 23:40:37.551: INFO: (3) /api/v1/namespaces/proxy-3161/services/http:proxy-service-bp245:portname2/proxy/: bar (200; 5.364946ms) May 23 23:40:37.551: INFO: (3) /api/v1/namespaces/proxy-3161/pods/http:proxy-service-bp245-llhlj:1080/proxy/: ... 
(200; 5.424281ms) May 23 23:40:37.551: INFO: (3) /api/v1/namespaces/proxy-3161/pods/http:proxy-service-bp245-llhlj:162/proxy/: bar (200; 5.494125ms) May 23 23:40:37.551: INFO: (3) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:443/proxy/: test (200; 5.474985ms) May 23 23:40:37.551: INFO: (3) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:460/proxy/: tls baz (200; 5.480431ms) May 23 23:40:37.551: INFO: (3) /api/v1/namespaces/proxy-3161/services/proxy-service-bp245:portname1/proxy/: foo (200; 5.471119ms) May 23 23:40:37.551: INFO: (3) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:160/proxy/: foo (200; 5.481387ms) May 23 23:40:37.551: INFO: (3) /api/v1/namespaces/proxy-3161/services/https:proxy-service-bp245:tlsportname2/proxy/: tls qux (200; 5.509751ms) May 23 23:40:37.551: INFO: (3) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:462/proxy/: tls qux (200; 5.475709ms) May 23 23:40:37.552: INFO: (3) /api/v1/namespaces/proxy-3161/services/proxy-service-bp245:portname2/proxy/: bar (200; 5.488658ms) May 23 23:40:37.552: INFO: (3) /api/v1/namespaces/proxy-3161/services/http:proxy-service-bp245:portname1/proxy/: foo (200; 5.699162ms) May 23 23:40:37.552: INFO: (3) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:1080/proxy/: test<... (200; 5.842006ms) May 23 23:40:37.552: INFO: (3) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:162/proxy/: bar (200; 5.9721ms) May 23 23:40:37.552: INFO: (3) /api/v1/namespaces/proxy-3161/pods/http:proxy-service-bp245-llhlj:160/proxy/: foo (200; 6.071598ms) May 23 23:40:37.556: INFO: (4) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:160/proxy/: foo (200; 3.815138ms) May 23 23:40:37.556: INFO: (4) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:1080/proxy/: test<... (200; 3.830292ms) May 23 23:40:37.556: INFO: (4) /api/v1/namespaces/proxy-3161/pods/http:proxy-service-bp245-llhlj:162/proxy/: bar (200; 3.898246ms) May 23 23:40:37.556: INFO: (4) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj/proxy/: test (200; 3.863604ms) May 23 23:40:37.556: INFO: (4) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:462/proxy/: tls qux (200; 3.968153ms) May 23 23:40:37.556: INFO: (4) /api/v1/namespaces/proxy-3161/pods/http:proxy-service-bp245-llhlj:160/proxy/: foo (200; 4.176467ms) May 23 23:40:37.557: INFO: (4) /api/v1/namespaces/proxy-3161/services/proxy-service-bp245:portname1/proxy/: foo (200; 4.418099ms) May 23 23:40:37.557: INFO: (4) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:162/proxy/: bar (200; 4.385256ms) May 23 23:40:37.557: INFO: (4) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:443/proxy/: ... 
(200; 4.798078ms) May 23 23:40:37.557: INFO: (4) /api/v1/namespaces/proxy-3161/services/http:proxy-service-bp245:portname1/proxy/: foo (200; 4.871588ms) May 23 23:40:37.557: INFO: (4) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:460/proxy/: tls baz (200; 4.88051ms) May 23 23:40:37.557: INFO: (4) /api/v1/namespaces/proxy-3161/services/http:proxy-service-bp245:portname2/proxy/: bar (200; 4.891905ms) May 23 23:40:37.557: INFO: (4) /api/v1/namespaces/proxy-3161/services/https:proxy-service-bp245:tlsportname1/proxy/: tls baz (200; 4.961697ms) May 23 23:40:37.557: INFO: (4) /api/v1/namespaces/proxy-3161/services/proxy-service-bp245:portname2/proxy/: bar (200; 5.009673ms) May 23 23:40:37.559: INFO: (5) /api/v1/namespaces/proxy-3161/pods/http:proxy-service-bp245-llhlj:162/proxy/: bar (200; 1.806002ms) May 23 23:40:37.561: INFO: (5) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:443/proxy/: test<... (200; 5.03752ms) May 23 23:40:37.562: INFO: (5) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:460/proxy/: tls baz (200; 5.066266ms) May 23 23:40:37.563: INFO: (5) /api/v1/namespaces/proxy-3161/services/https:proxy-service-bp245:tlsportname1/proxy/: tls baz (200; 5.313635ms) May 23 23:40:37.563: INFO: (5) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:160/proxy/: foo (200; 5.359813ms) May 23 23:40:37.563: INFO: (5) /api/v1/namespaces/proxy-3161/pods/http:proxy-service-bp245-llhlj:1080/proxy/: ... (200; 5.621086ms) May 23 23:40:37.563: INFO: (5) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:162/proxy/: bar (200; 5.569854ms) May 23 23:40:37.563: INFO: (5) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj/proxy/: test (200; 5.614391ms) May 23 23:40:37.563: INFO: (5) /api/v1/namespaces/proxy-3161/services/http:proxy-service-bp245:portname1/proxy/: foo (200; 5.515411ms) May 23 23:40:37.563: INFO: (5) /api/v1/namespaces/proxy-3161/services/https:proxy-service-bp245:tlsportname2/proxy/: tls qux (200; 5.603018ms) May 23 23:40:37.563: INFO: (5) /api/v1/namespaces/proxy-3161/services/proxy-service-bp245:portname2/proxy/: bar (200; 5.632704ms) May 23 23:40:37.567: INFO: (6) /api/v1/namespaces/proxy-3161/pods/http:proxy-service-bp245-llhlj:160/proxy/: foo (200; 3.368243ms) May 23 23:40:37.567: INFO: (6) /api/v1/namespaces/proxy-3161/pods/http:proxy-service-bp245-llhlj:162/proxy/: bar (200; 3.725598ms) May 23 23:40:37.567: INFO: (6) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:443/proxy/: ... (200; 3.81398ms) May 23 23:40:37.567: INFO: (6) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:460/proxy/: tls baz (200; 3.779413ms) May 23 23:40:37.567: INFO: (6) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj/proxy/: test (200; 3.996492ms) May 23 23:40:37.567: INFO: (6) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:160/proxy/: foo (200; 4.339382ms) May 23 23:40:37.567: INFO: (6) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:162/proxy/: bar (200; 4.098717ms) May 23 23:40:37.567: INFO: (6) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:1080/proxy/: test<... 
(200; 4.023571ms) May 23 23:40:37.568: INFO: (6) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:462/proxy/: tls qux (200; 4.935806ms) May 23 23:40:37.568: INFO: (6) /api/v1/namespaces/proxy-3161/services/http:proxy-service-bp245:portname1/proxy/: foo (200; 5.316083ms) May 23 23:40:37.569: INFO: (6) /api/v1/namespaces/proxy-3161/services/proxy-service-bp245:portname1/proxy/: foo (200; 5.239755ms) May 23 23:40:37.569: INFO: (6) /api/v1/namespaces/proxy-3161/services/http:proxy-service-bp245:portname2/proxy/: bar (200; 5.735055ms) May 23 23:40:37.569: INFO: (6) /api/v1/namespaces/proxy-3161/services/proxy-service-bp245:portname2/proxy/: bar (200; 5.527039ms) May 23 23:40:37.569: INFO: (6) /api/v1/namespaces/proxy-3161/services/https:proxy-service-bp245:tlsportname2/proxy/: tls qux (200; 5.74154ms) May 23 23:40:37.569: INFO: (6) /api/v1/namespaces/proxy-3161/services/https:proxy-service-bp245:tlsportname1/proxy/: tls baz (200; 5.680642ms) May 23 23:40:37.573: INFO: (7) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:462/proxy/: tls qux (200; 3.570928ms) May 23 23:40:37.573: INFO: (7) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:160/proxy/: foo (200; 3.609759ms) May 23 23:40:37.573: INFO: (7) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:162/proxy/: bar (200; 3.561662ms) May 23 23:40:37.573: INFO: (7) /api/v1/namespaces/proxy-3161/pods/http:proxy-service-bp245-llhlj:162/proxy/: bar (200; 4.120501ms) May 23 23:40:37.573: INFO: (7) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:1080/proxy/: test<... (200; 4.281951ms) May 23 23:40:37.573: INFO: (7) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj/proxy/: test (200; 4.264213ms) May 23 23:40:37.574: INFO: (7) /api/v1/namespaces/proxy-3161/pods/http:proxy-service-bp245-llhlj:160/proxy/: foo (200; 4.325092ms) May 23 23:40:37.574: INFO: (7) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:443/proxy/: ... 
(200; 4.658804ms) May 23 23:40:37.574: INFO: (7) /api/v1/namespaces/proxy-3161/services/http:proxy-service-bp245:portname1/proxy/: foo (200; 5.298185ms) May 23 23:40:37.575: INFO: (7) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:460/proxy/: tls baz (200; 5.402475ms) May 23 23:40:37.575: INFO: (7) /api/v1/namespaces/proxy-3161/services/proxy-service-bp245:portname1/proxy/: foo (200; 5.552162ms) May 23 23:40:37.575: INFO: (7) /api/v1/namespaces/proxy-3161/services/https:proxy-service-bp245:tlsportname1/proxy/: tls baz (200; 5.492871ms) May 23 23:40:37.575: INFO: (7) /api/v1/namespaces/proxy-3161/services/http:proxy-service-bp245:portname2/proxy/: bar (200; 5.486943ms) May 23 23:40:37.575: INFO: (7) /api/v1/namespaces/proxy-3161/services/https:proxy-service-bp245:tlsportname2/proxy/: tls qux (200; 5.541197ms) May 23 23:40:37.575: INFO: (7) /api/v1/namespaces/proxy-3161/services/proxy-service-bp245:portname2/proxy/: bar (200; 5.793049ms) May 23 23:40:37.579: INFO: (8) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:162/proxy/: bar (200; 3.778001ms) May 23 23:40:37.581: INFO: (8) /api/v1/namespaces/proxy-3161/pods/http:proxy-service-bp245-llhlj:160/proxy/: foo (200; 6.250509ms) May 23 23:40:37.581: INFO: (8) /api/v1/namespaces/proxy-3161/pods/http:proxy-service-bp245-llhlj:162/proxy/: bar (200; 6.265399ms) May 23 23:40:37.581: INFO: (8) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:160/proxy/: foo (200; 6.365254ms) May 23 23:40:37.581: INFO: (8) /api/v1/namespaces/proxy-3161/services/http:proxy-service-bp245:portname1/proxy/: foo (200; 6.442914ms) May 23 23:40:37.581: INFO: (8) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj/proxy/: test (200; 6.442639ms) May 23 23:40:37.581: INFO: (8) /api/v1/namespaces/proxy-3161/services/proxy-service-bp245:portname1/proxy/: foo (200; 6.392501ms) May 23 23:40:37.581: INFO: (8) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:443/proxy/: ... (200; 6.503162ms) May 23 23:40:37.582: INFO: (8) /api/v1/namespaces/proxy-3161/services/https:proxy-service-bp245:tlsportname1/proxy/: tls baz (200; 6.433673ms) May 23 23:40:37.582: INFO: (8) /api/v1/namespaces/proxy-3161/services/proxy-service-bp245:portname2/proxy/: bar (200; 6.379521ms) May 23 23:40:37.582: INFO: (8) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:1080/proxy/: test<... (200; 6.420562ms) May 23 23:40:37.583: INFO: (8) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:460/proxy/: tls baz (200; 7.475359ms) May 23 23:40:37.583: INFO: (8) /api/v1/namespaces/proxy-3161/services/https:proxy-service-bp245:tlsportname2/proxy/: tls qux (200; 7.764565ms) May 23 23:40:37.583: INFO: (8) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:462/proxy/: tls qux (200; 7.933932ms) May 23 23:40:37.587: INFO: (9) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:443/proxy/: test<... 
(200; 3.912992ms) May 23 23:40:37.587: INFO: (9) /api/v1/namespaces/proxy-3161/services/proxy-service-bp245:portname1/proxy/: foo (200; 3.943344ms) May 23 23:40:37.587: INFO: (9) /api/v1/namespaces/proxy-3161/pods/http:proxy-service-bp245-llhlj:160/proxy/: foo (200; 3.867329ms) May 23 23:40:37.587: INFO: (9) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj/proxy/: test (200; 3.925394ms) May 23 23:40:37.587: INFO: (9) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:462/proxy/: tls qux (200; 3.918385ms) May 23 23:40:37.587: INFO: (9) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:162/proxy/: bar (200; 4.002959ms) May 23 23:40:37.587: INFO: (9) /api/v1/namespaces/proxy-3161/pods/http:proxy-service-bp245-llhlj:162/proxy/: bar (200; 3.928498ms) May 23 23:40:37.587: INFO: (9) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:160/proxy/: foo (200; 3.956333ms) May 23 23:40:37.587: INFO: (9) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:460/proxy/: tls baz (200; 4.339164ms) May 23 23:40:37.587: INFO: (9) /api/v1/namespaces/proxy-3161/services/proxy-service-bp245:portname2/proxy/: bar (200; 4.292837ms) May 23 23:40:37.587: INFO: (9) /api/v1/namespaces/proxy-3161/pods/http:proxy-service-bp245-llhlj:1080/proxy/: ... (200; 4.253488ms) May 23 23:40:37.588: INFO: (9) /api/v1/namespaces/proxy-3161/services/http:proxy-service-bp245:portname1/proxy/: foo (200; 4.545077ms) May 23 23:40:37.588: INFO: (9) /api/v1/namespaces/proxy-3161/services/https:proxy-service-bp245:tlsportname2/proxy/: tls qux (200; 4.690084ms) May 23 23:40:37.588: INFO: (9) /api/v1/namespaces/proxy-3161/services/https:proxy-service-bp245:tlsportname1/proxy/: tls baz (200; 4.888265ms) May 23 23:40:37.588: INFO: (9) /api/v1/namespaces/proxy-3161/services/http:proxy-service-bp245:portname2/proxy/: bar (200; 5.019988ms) May 23 23:40:37.593: INFO: (10) /api/v1/namespaces/proxy-3161/services/http:proxy-service-bp245:portname2/proxy/: bar (200; 4.409104ms) May 23 23:40:37.593: INFO: (10) /api/v1/namespaces/proxy-3161/services/http:proxy-service-bp245:portname1/proxy/: foo (200; 4.365953ms) May 23 23:40:37.593: INFO: (10) /api/v1/namespaces/proxy-3161/services/proxy-service-bp245:portname2/proxy/: bar (200; 4.701122ms) May 23 23:40:37.593: INFO: (10) /api/v1/namespaces/proxy-3161/services/https:proxy-service-bp245:tlsportname2/proxy/: tls qux (200; 4.752411ms) May 23 23:40:37.593: INFO: (10) /api/v1/namespaces/proxy-3161/pods/http:proxy-service-bp245-llhlj:1080/proxy/: ... (200; 4.677444ms) May 23 23:40:37.593: INFO: (10) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj/proxy/: test (200; 4.71113ms) May 23 23:40:37.593: INFO: (10) /api/v1/namespaces/proxy-3161/services/proxy-service-bp245:portname1/proxy/: foo (200; 4.717443ms) May 23 23:40:37.593: INFO: (10) /api/v1/namespaces/proxy-3161/services/https:proxy-service-bp245:tlsportname1/proxy/: tls baz (200; 4.683028ms) May 23 23:40:37.593: INFO: (10) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:162/proxy/: bar (200; 5.068015ms) May 23 23:40:37.593: INFO: (10) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:443/proxy/: test<... 
(200; 5.241389ms) May 23 23:40:37.593: INFO: (10) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:160/proxy/: foo (200; 5.243166ms) May 23 23:40:37.598: INFO: (11) /api/v1/namespaces/proxy-3161/services/http:proxy-service-bp245:portname1/proxy/: foo (200; 4.045302ms) May 23 23:40:37.598: INFO: (11) /api/v1/namespaces/proxy-3161/services/proxy-service-bp245:portname2/proxy/: bar (200; 4.431007ms) May 23 23:40:37.598: INFO: (11) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:162/proxy/: bar (200; 3.539353ms) May 23 23:40:37.598: INFO: (11) /api/v1/namespaces/proxy-3161/services/https:proxy-service-bp245:tlsportname1/proxy/: tls baz (200; 3.531585ms) May 23 23:40:37.598: INFO: (11) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:160/proxy/: foo (200; 3.784781ms) May 23 23:40:37.598: INFO: (11) /api/v1/namespaces/proxy-3161/services/http:proxy-service-bp245:portname2/proxy/: bar (200; 4.301029ms) May 23 23:40:37.598: INFO: (11) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:460/proxy/: tls baz (200; 4.404421ms) May 23 23:40:37.598: INFO: (11) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj/proxy/: test (200; 4.270036ms) May 23 23:40:37.599: INFO: (11) /api/v1/namespaces/proxy-3161/services/proxy-service-bp245:portname1/proxy/: foo (200; 4.098106ms) May 23 23:40:37.599: INFO: (11) /api/v1/namespaces/proxy-3161/pods/http:proxy-service-bp245-llhlj:160/proxy/: foo (200; 5.041129ms) May 23 23:40:37.599: INFO: (11) /api/v1/namespaces/proxy-3161/pods/http:proxy-service-bp245-llhlj:162/proxy/: bar (200; 4.990381ms) May 23 23:40:37.599: INFO: (11) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:443/proxy/: ... (200; 5.206937ms) May 23 23:40:37.599: INFO: (11) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:1080/proxy/: test<... (200; 5.506622ms) May 23 23:40:37.599: INFO: (11) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:462/proxy/: tls qux (200; 5.524857ms) May 23 23:40:37.602: INFO: (12) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:160/proxy/: foo (200; 2.813009ms) May 23 23:40:37.602: INFO: (12) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj/proxy/: test (200; 2.964481ms) May 23 23:40:37.602: INFO: (12) /api/v1/namespaces/proxy-3161/pods/http:proxy-service-bp245-llhlj:162/proxy/: bar (200; 3.083332ms) May 23 23:40:37.603: INFO: (12) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:460/proxy/: tls baz (200; 3.289353ms) May 23 23:40:37.603: INFO: (12) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:443/proxy/: test<... (200; 4.606626ms) May 23 23:40:37.604: INFO: (12) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:462/proxy/: tls qux (200; 4.662743ms) May 23 23:40:37.604: INFO: (12) /api/v1/namespaces/proxy-3161/services/proxy-service-bp245:portname2/proxy/: bar (200; 4.695974ms) May 23 23:40:37.604: INFO: (12) /api/v1/namespaces/proxy-3161/pods/http:proxy-service-bp245-llhlj:1080/proxy/: ... 
(200; 4.635528ms) May 23 23:40:37.604: INFO: (12) /api/v1/namespaces/proxy-3161/services/http:proxy-service-bp245:portname1/proxy/: foo (200; 4.689649ms) May 23 23:40:37.604: INFO: (12) /api/v1/namespaces/proxy-3161/services/http:proxy-service-bp245:portname2/proxy/: bar (200; 4.669987ms) May 23 23:40:37.611: INFO: (13) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:462/proxy/: tls qux (200; 6.402797ms) May 23 23:40:37.611: INFO: (13) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:460/proxy/: tls baz (200; 7.17913ms) May 23 23:40:37.611: INFO: (13) /api/v1/namespaces/proxy-3161/pods/http:proxy-service-bp245-llhlj:1080/proxy/: ... (200; 7.292383ms) May 23 23:40:37.615: INFO: (13) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:1080/proxy/: test<... (200; 10.654535ms) May 23 23:40:37.615: INFO: (13) /api/v1/namespaces/proxy-3161/services/http:proxy-service-bp245:portname2/proxy/: bar (200; 10.943651ms) May 23 23:40:37.615: INFO: (13) /api/v1/namespaces/proxy-3161/services/proxy-service-bp245:portname1/proxy/: foo (200; 11.047638ms) May 23 23:40:37.616: INFO: (13) /api/v1/namespaces/proxy-3161/services/http:proxy-service-bp245:portname1/proxy/: foo (200; 12.036662ms) May 23 23:40:37.616: INFO: (13) /api/v1/namespaces/proxy-3161/pods/http:proxy-service-bp245-llhlj:162/proxy/: bar (200; 11.964816ms) May 23 23:40:37.616: INFO: (13) /api/v1/namespaces/proxy-3161/services/https:proxy-service-bp245:tlsportname1/proxy/: tls baz (200; 12.011962ms) May 23 23:40:37.616: INFO: (13) /api/v1/namespaces/proxy-3161/pods/http:proxy-service-bp245-llhlj:160/proxy/: foo (200; 11.982675ms) May 23 23:40:37.616: INFO: (13) /api/v1/namespaces/proxy-3161/services/proxy-service-bp245:portname2/proxy/: bar (200; 12.02112ms) May 23 23:40:37.616: INFO: (13) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:162/proxy/: bar (200; 12.07737ms) May 23 23:40:37.616: INFO: (13) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:160/proxy/: foo (200; 12.040575ms) May 23 23:40:37.616: INFO: (13) /api/v1/namespaces/proxy-3161/services/https:proxy-service-bp245:tlsportname2/proxy/: tls qux (200; 12.070488ms) May 23 23:40:37.616: INFO: (13) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj/proxy/: test (200; 12.066286ms) May 23 23:40:37.616: INFO: (13) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:443/proxy/: test<... (200; 4.665347ms) May 23 23:40:37.621: INFO: (14) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:462/proxy/: tls qux (200; 4.775098ms) May 23 23:40:37.621: INFO: (14) /api/v1/namespaces/proxy-3161/pods/http:proxy-service-bp245-llhlj:1080/proxy/: ... 
(200; 4.847413ms) May 23 23:40:37.621: INFO: (14) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:460/proxy/: tls baz (200; 4.845342ms) May 23 23:40:37.621: INFO: (14) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj/proxy/: test (200; 4.877887ms) May 23 23:40:37.622: INFO: (14) /api/v1/namespaces/proxy-3161/services/http:proxy-service-bp245:portname2/proxy/: bar (200; 6.016694ms) May 23 23:40:37.622: INFO: (14) /api/v1/namespaces/proxy-3161/services/http:proxy-service-bp245:portname1/proxy/: foo (200; 6.030721ms) May 23 23:40:37.622: INFO: (14) /api/v1/namespaces/proxy-3161/services/https:proxy-service-bp245:tlsportname1/proxy/: tls baz (200; 6.013535ms) May 23 23:40:37.622: INFO: (14) /api/v1/namespaces/proxy-3161/services/https:proxy-service-bp245:tlsportname2/proxy/: tls qux (200; 6.076076ms) May 23 23:40:37.622: INFO: (14) /api/v1/namespaces/proxy-3161/services/proxy-service-bp245:portname2/proxy/: bar (200; 6.055126ms) May 23 23:40:37.622: INFO: (14) /api/v1/namespaces/proxy-3161/services/proxy-service-bp245:portname1/proxy/: foo (200; 6.108873ms) May 23 23:40:37.625: INFO: (15) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:1080/proxy/: test<... (200; 1.990276ms) May 23 23:40:37.625: INFO: (15) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:460/proxy/: tls baz (200; 2.288318ms) May 23 23:40:37.628: INFO: (15) /api/v1/namespaces/proxy-3161/services/https:proxy-service-bp245:tlsportname2/proxy/: tls qux (200; 4.994121ms) May 23 23:40:37.628: INFO: (15) /api/v1/namespaces/proxy-3161/services/http:proxy-service-bp245:portname2/proxy/: bar (200; 4.946017ms) May 23 23:40:37.628: INFO: (15) /api/v1/namespaces/proxy-3161/services/http:proxy-service-bp245:portname1/proxy/: foo (200; 5.020899ms) May 23 23:40:37.628: INFO: (15) /api/v1/namespaces/proxy-3161/services/https:proxy-service-bp245:tlsportname1/proxy/: tls baz (200; 5.051834ms) May 23 23:40:37.628: INFO: (15) /api/v1/namespaces/proxy-3161/services/proxy-service-bp245:portname1/proxy/: foo (200; 5.399844ms) May 23 23:40:37.628: INFO: (15) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:462/proxy/: tls qux (200; 5.40696ms) May 23 23:40:37.628: INFO: (15) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:443/proxy/: test (200; 5.600487ms) May 23 23:40:37.628: INFO: (15) /api/v1/namespaces/proxy-3161/pods/http:proxy-service-bp245-llhlj:160/proxy/: foo (200; 5.735323ms) May 23 23:40:37.628: INFO: (15) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:160/proxy/: foo (200; 5.795282ms) May 23 23:40:37.628: INFO: (15) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:162/proxy/: bar (200; 5.856974ms) May 23 23:40:37.628: INFO: (15) /api/v1/namespaces/proxy-3161/pods/http:proxy-service-bp245-llhlj:1080/proxy/: ... (200; 5.796426ms) May 23 23:40:37.632: INFO: (16) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:1080/proxy/: test<... 
(200; 3.56214ms) May 23 23:40:37.632: INFO: (16) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:460/proxy/: tls baz (200; 3.400093ms) May 23 23:40:37.633: INFO: (16) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:162/proxy/: bar (200; 3.978319ms) May 23 23:40:37.633: INFO: (16) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:160/proxy/: foo (200; 4.130293ms) May 23 23:40:37.633: INFO: (16) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:462/proxy/: tls qux (200; 4.197548ms) May 23 23:40:37.633: INFO: (16) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj/proxy/: test (200; 4.193507ms) May 23 23:40:37.633: INFO: (16) /api/v1/namespaces/proxy-3161/services/https:proxy-service-bp245:tlsportname1/proxy/: tls baz (200; 4.170789ms) May 23 23:40:37.633: INFO: (16) /api/v1/namespaces/proxy-3161/pods/http:proxy-service-bp245-llhlj:162/proxy/: bar (200; 4.26352ms) May 23 23:40:37.633: INFO: (16) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:443/proxy/: ... (200; 4.244227ms) May 23 23:40:37.634: INFO: (16) /api/v1/namespaces/proxy-3161/services/http:proxy-service-bp245:portname1/proxy/: foo (200; 5.136732ms) May 23 23:40:37.634: INFO: (16) /api/v1/namespaces/proxy-3161/services/proxy-service-bp245:portname1/proxy/: foo (200; 5.593911ms) May 23 23:40:37.634: INFO: (16) /api/v1/namespaces/proxy-3161/services/https:proxy-service-bp245:tlsportname2/proxy/: tls qux (200; 5.147811ms) May 23 23:40:37.634: INFO: (16) /api/v1/namespaces/proxy-3161/services/http:proxy-service-bp245:portname2/proxy/: bar (200; 5.180116ms) May 23 23:40:37.634: INFO: (16) /api/v1/namespaces/proxy-3161/services/proxy-service-bp245:portname2/proxy/: bar (200; 5.26466ms) May 23 23:40:37.636: INFO: (17) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:1080/proxy/: test<... (200; 2.170089ms) May 23 23:40:37.636: INFO: (17) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj/proxy/: test (200; 2.225985ms) May 23 23:40:37.639: INFO: (17) /api/v1/namespaces/proxy-3161/services/http:proxy-service-bp245:portname1/proxy/: foo (200; 4.540516ms) May 23 23:40:37.639: INFO: (17) /api/v1/namespaces/proxy-3161/services/https:proxy-service-bp245:tlsportname1/proxy/: tls baz (200; 4.809203ms) May 23 23:40:37.639: INFO: (17) /api/v1/namespaces/proxy-3161/services/https:proxy-service-bp245:tlsportname2/proxy/: tls qux (200; 4.844336ms) May 23 23:40:37.639: INFO: (17) /api/v1/namespaces/proxy-3161/services/proxy-service-bp245:portname2/proxy/: bar (200; 4.85994ms) May 23 23:40:37.639: INFO: (17) /api/v1/namespaces/proxy-3161/pods/http:proxy-service-bp245-llhlj:162/proxy/: bar (200; 4.961612ms) May 23 23:40:37.639: INFO: (17) /api/v1/namespaces/proxy-3161/services/http:proxy-service-bp245:portname2/proxy/: bar (200; 5.05097ms) May 23 23:40:37.639: INFO: (17) /api/v1/namespaces/proxy-3161/services/proxy-service-bp245:portname1/proxy/: foo (200; 5.057007ms) May 23 23:40:37.640: INFO: (17) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:160/proxy/: foo (200; 5.409816ms) May 23 23:40:37.640: INFO: (17) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:460/proxy/: tls baz (200; 5.486863ms) May 23 23:40:37.640: INFO: (17) /api/v1/namespaces/proxy-3161/pods/http:proxy-service-bp245-llhlj:1080/proxy/: ... 
(200; 5.435351ms) May 23 23:40:37.640: INFO: (17) /api/v1/namespaces/proxy-3161/pods/http:proxy-service-bp245-llhlj:160/proxy/: foo (200; 5.458655ms) May 23 23:40:37.640: INFO: (17) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:462/proxy/: tls qux (200; 5.437381ms) May 23 23:40:37.640: INFO: (17) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:162/proxy/: bar (200; 5.390086ms) May 23 23:40:37.640: INFO: (17) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:443/proxy/: test<... (200; 5.132863ms) May 23 23:40:37.645: INFO: (18) /api/v1/namespaces/proxy-3161/pods/http:proxy-service-bp245-llhlj:1080/proxy/: ... (200; 5.140339ms) May 23 23:40:37.645: INFO: (18) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:460/proxy/: tls baz (200; 5.414691ms) May 23 23:40:37.645: INFO: (18) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj/proxy/: test (200; 5.383687ms) May 23 23:40:37.647: INFO: (18) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:160/proxy/: foo (200; 6.668988ms) May 23 23:40:37.647: INFO: (18) /api/v1/namespaces/proxy-3161/pods/http:proxy-service-bp245-llhlj:162/proxy/: bar (200; 6.629302ms) May 23 23:40:37.647: INFO: (18) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:162/proxy/: bar (200; 6.772439ms) May 23 23:40:37.647: INFO: (18) /api/v1/namespaces/proxy-3161/pods/http:proxy-service-bp245-llhlj:160/proxy/: foo (200; 6.729297ms) May 23 23:40:37.647: INFO: (18) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:462/proxy/: tls qux (200; 6.858257ms) May 23 23:40:37.647: INFO: (18) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:443/proxy/: test<... (200; 4.487755ms) May 23 23:40:37.653: INFO: (19) /api/v1/namespaces/proxy-3161/services/https:proxy-service-bp245:tlsportname1/proxy/: tls baz (200; 4.462679ms) May 23 23:40:37.653: INFO: (19) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj/proxy/: test (200; 4.739756ms) May 23 23:40:37.654: INFO: (19) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:460/proxy/: tls baz (200; 4.771226ms) May 23 23:40:37.654: INFO: (19) /api/v1/namespaces/proxy-3161/pods/https:proxy-service-bp245-llhlj:443/proxy/: ... (200; 5.009387ms) May 23 23:40:37.654: INFO: (19) /api/v1/namespaces/proxy-3161/pods/http:proxy-service-bp245-llhlj:160/proxy/: foo (200; 5.014015ms) May 23 23:40:37.654: INFO: (19) /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:162/proxy/: bar (200; 4.771856ms) May 23 23:40:37.654: INFO: (19) /api/v1/namespaces/proxy-3161/services/http:proxy-service-bp245:portname2/proxy/: bar (200; 5.001365ms) STEP: deleting ReplicationController proxy-service-bp245 in namespace proxy-3161, will wait for the garbage collector to delete the pods May 23 23:40:37.711: INFO: Deleting ReplicationController proxy-service-bp245 took: 5.507833ms May 23 23:40:38.011: INFO: Terminating ReplicationController proxy-service-bp245 pods took: 300.218279ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:40:44.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-3161" for this suite. 
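The 320 attempts above are plain GETs against the apiserver's proxy subresource, addressed either to the pod (optionally with a scheme and a ":port" suffix) or to a named service port. A minimal client-go sketch of one such request, reusing names from this run; this is an illustration, not the suite's own code:

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from the same kubeconfig the suite logs (>>> kubeConfig).
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // GET /api/v1/namespaces/proxy-3161/pods/proxy-service-bp245-llhlj:160/proxy/
        // The ":160" suffix picks the target port; the service variants use
        // Resource("services") with a name like "proxy-service-bp245:portname1".
        body, err := cs.CoreV1().RESTClient().Get().
            Namespace("proxy-3161").
            Resource("pods").
            Name("proxy-service-bp245-llhlj:160").
            SubResource("proxy").
            DoRaw(context.TODO())
        if err != nil {
            panic(err)
        }
        fmt.Println(string(body)) // the echo server answers "foo" on port 160 above
    }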
• [SLOW TEST:19.664 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59
    should proxy through a service and a pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":288,"completed":20,"skipped":445,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Lease lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 23:40:44.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 23:40:45.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-8670" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":288,"completed":21,"skipped":456,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 23:40:45.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 23 23:40:45.287: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fe96f8cc-5f73-43a7-8bcb-52b3e81c4580" in namespace "projected-5149" to be "Succeeded or Failed"
May 23 23:40:45.296: INFO: Pod "downwardapi-volume-fe96f8cc-5f73-43a7-8bcb-52b3e81c4580": Phase="Pending", Reason="", readiness=false. Elapsed: 9.205114ms
May 23 23:40:47.300: INFO: Pod "downwardapi-volume-fe96f8cc-5f73-43a7-8bcb-52b3e81c4580": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013021162s
May 23 23:40:49.304: INFO: Pod "downwardapi-volume-fe96f8cc-5f73-43a7-8bcb-52b3e81c4580": Phase="Running", Reason="", readiness=true. Elapsed: 4.01693703s
May 23 23:40:51.309: INFO: Pod "downwardapi-volume-fe96f8cc-5f73-43a7-8bcb-52b3e81c4580": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021880188s
STEP: Saw pod success
May 23 23:40:51.309: INFO: Pod "downwardapi-volume-fe96f8cc-5f73-43a7-8bcb-52b3e81c4580" satisfied condition "Succeeded or Failed"
May 23 23:40:51.312: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-fe96f8cc-5f73-43a7-8bcb-52b3e81c4580 container client-container:
STEP: delete the pod
May 23 23:40:51.333: INFO: Waiting for pod downwardapi-volume-fe96f8cc-5f73-43a7-8bcb-52b3e81c4580 to disappear
May 23 23:40:51.336: INFO: Pod downwardapi-volume-fe96f8cc-5f73-43a7-8bcb-52b3e81c4580 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 23:40:51.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5149" for this suite.
• [SLOW TEST:6.161 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":288,"completed":22,"skipped":468,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 23:40:51.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on node default medium
May 23 23:40:51.453: INFO: Waiting up to 5m0s for pod "pod-8a92a349-ae9f-493e-855c-459ecb11b4e3" in namespace "emptydir-5278" to be "Succeeded or Failed"
May 23 23:40:51.467: INFO: Pod "pod-8a92a349-ae9f-493e-855c-459ecb11b4e3": Phase="Pending", Reason="", readiness=false. Elapsed: 13.899659ms
May 23 23:40:53.470: INFO: Pod "pod-8a92a349-ae9f-493e-855c-459ecb11b4e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016811909s
May 23 23:40:55.473: INFO: Pod "pod-8a92a349-ae9f-493e-855c-459ecb11b4e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020318237s
STEP: Saw pod success
May 23 23:40:55.473: INFO: Pod "pod-8a92a349-ae9f-493e-855c-459ecb11b4e3" satisfied condition "Succeeded or Failed"
May 23 23:40:55.476: INFO: Trying to get logs from node latest-worker2 pod pod-8a92a349-ae9f-493e-855c-459ecb11b4e3 container test-container:
STEP: delete the pod
May 23 23:40:55.508: INFO: Waiting for pod pod-8a92a349-ae9f-493e-855c-459ecb11b4e3 to disappear
May 23 23:40:55.532: INFO: Pod pod-8a92a349-ae9f-493e-855c-459ecb11b4e3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 23:40:55.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5278" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":23,"skipped":482,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 23:40:55.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 23 23:40:56.055: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 23 23:40:58.209: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725874056, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725874056, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725874056, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725874056, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 23 23:41:00.213: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725874056, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725874056, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725874056, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725874056, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 23 23:41:03.299: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 23:41:03.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4261" for this suite.
STEP: Destroying namespace "webhook-4261-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:7.919 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":288,"completed":24,"skipped":494,"failed":0}
SSSSSSSSSSSSSS
------------------------------
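The webhook spec above never calls a webhook; it only checks that the admissionregistration.k8s.io/v1 group/version and its two webhook-configuration resources are advertised by discovery. A rough client-go equivalent of the final STEP, assuming the same kubeconfig path the suite logs (a sketch, not the test's own code):

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Fetch the /apis/admissionregistration.k8s.io/v1 discovery document
        // and look for the two webhook-configuration resources named above.
        list, err := cs.Discovery().ServerResourcesForGroupVersion("admissionregistration.k8s.io/v1")
        if err != nil {
            panic(err)
        }
        for _, r := range list.APIResources {
            if r.Name == "mutatingwebhookconfigurations" || r.Name == "validatingwebhookconfigurations" {
                fmt.Println("found resource:", r.Name)
            }
        }
    }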
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 23:41:03.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with configMap that has name projected-configmap-test-upd-c4fd0043-f3b1-4a1e-ba93-9ad04fa8e408
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-c4fd0043-f3b1-4a1e-ba93-9ad04fa8e408
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 23:41:11.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4931" for this suite.
• [SLOW TEST:8.450 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":25,"skipped":508,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
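"waiting to observe update in volume" works because the kubelet refreshes projected ConfigMap volumes on a later sync after the object changes; the update itself is a single API call. A minimal sketch of that call (the ConfigMap name below is a hypothetical placeholder, not the generated name above, and the data key/value are illustrative):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ctx := context.TODO()

        // Fetch, mutate, and update the ConfigMap backing the projected volume.
        // The kubelet rewrites the mounted files on a subsequent sync, which is
        // why the test polls the volume instead of asserting immediately.
        cm, err := cs.CoreV1().ConfigMaps("projected-4931").Get(ctx, "my-projected-cm", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        if cm.Data == nil {
            cm.Data = map[string]string{}
        }
        cm.Data["data-1"] = "value-2" // illustrative key/value
        if _, err := cs.CoreV1().ConfigMaps("projected-4931").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
    }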
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 23:41:11.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-378de1be-7342-4c64-b542-cd0c2b8520bf
STEP: Creating a pod to test consume secrets
May 23 23:41:12.028: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-398661b9-edbe-4d02-9d8b-295dab16bf21" in namespace "projected-3529" to be "Succeeded or Failed"
May 23 23:41:12.040: INFO: Pod "pod-projected-secrets-398661b9-edbe-4d02-9d8b-295dab16bf21": Phase="Pending", Reason="", readiness=false. Elapsed: 11.805387ms
May 23 23:41:14.044: INFO: Pod "pod-projected-secrets-398661b9-edbe-4d02-9d8b-295dab16bf21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015751393s
May 23 23:41:16.048: INFO: Pod "pod-projected-secrets-398661b9-edbe-4d02-9d8b-295dab16bf21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019410068s
STEP: Saw pod success
May 23 23:41:16.048: INFO: Pod "pod-projected-secrets-398661b9-edbe-4d02-9d8b-295dab16bf21" satisfied condition "Succeeded or Failed"
May 23 23:41:16.050: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-398661b9-edbe-4d02-9d8b-295dab16bf21 container projected-secret-volume-test:
STEP: delete the pod
May 23 23:41:16.229: INFO: Waiting for pod pod-projected-secrets-398661b9-edbe-4d02-9d8b-295dab16bf21 to disappear
May 23 23:41:16.262: INFO: Pod pod-projected-secrets-398661b9-edbe-4d02-9d8b-295dab16bf21 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 23:41:16.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3529" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":26,"skipped":532,"failed":0}
SS
------------------------------
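"defaultMode set" refers to the permission bits applied to every file the projection writes; the test then verifies those bits on the mounted file. A sketch of the volume stanza such a pod carries, using the secret name from this run (the 0400 mode is an assumption for illustration):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // A projected Secret volume with an explicit defaultMode.
        mode := int32(0400)
        vol := corev1.Volume{
            Name: "projected-secret-volume",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    DefaultMode: &mode,
                    Sources: []corev1.VolumeProjection{{
                        Secret: &corev1.SecretProjection{
                            LocalObjectReference: corev1.LocalObjectReference{
                                Name: "projected-secret-test-378de1be-7342-4c64-b542-cd0c2b8520bf",
                            },
                        },
                    }},
                },
            },
        }
        fmt.Printf("%+v\n", vol)
    }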
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":26,"skipped":532,"failed":0} SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:41:16.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 23 23:41:16.442: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 23 23:41:16.500: INFO: Waiting for terminating namespaces to be deleted... May 23 23:41:16.503: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 23 23:41:16.509: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) May 23 23:41:16.510: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 23 23:41:16.510: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) May 23 23:41:16.510: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 23 23:41:16.510: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 23 23:41:16.510: INFO: Container kindnet-cni ready: true, restart count 0 May 23 23:41:16.510: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 23 23:41:16.510: INFO: Container kube-proxy ready: true, restart count 0 May 23 23:41:16.510: INFO: pod-projected-configmaps-7b2ffd53-f93c-4c75-a9e5-55cd81bfddb7 from projected-4931 started at 2020-05-23 23:41:03 +0000 UTC (1 container statuses recorded) May 23 23:41:16.510: INFO: Container projected-configmap-volume-test ready: true, restart count 0 May 23 23:41:16.510: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 23 23:41:16.515: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) May 23 23:41:16.515: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 23 23:41:16.515: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) May 23 23:41:16.515: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 23 23:41:16.515: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 23 23:41:16.515: INFO: Container kindnet-cni ready: true, restart count 0 May 23 23:41:16.515: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 23 23:41:16.515: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to 
run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 May 23 23:41:16.648: INFO: Pod rally-c184502e-30nwopzm requesting resource cpu=0m on Node latest-worker May 23 23:41:16.648: INFO: Pod terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 requesting resource cpu=0m on Node latest-worker2 May 23 23:41:16.648: INFO: Pod kindnet-hg2tf requesting resource cpu=100m on Node latest-worker May 23 23:41:16.648: INFO: Pod kindnet-jl4dn requesting resource cpu=100m on Node latest-worker2 May 23 23:41:16.648: INFO: Pod kube-proxy-c8n27 requesting resource cpu=0m on Node latest-worker May 23 23:41:16.648: INFO: Pod kube-proxy-pcmmp requesting resource cpu=0m on Node latest-worker2 May 23 23:41:16.648: INFO: Pod pod-projected-configmaps-7b2ffd53-f93c-4c75-a9e5-55cd81bfddb7 requesting resource cpu=0m on Node latest-worker STEP: Starting Pods to consume most of the cluster CPU. May 23 23:41:16.648: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker May 23 23:41:16.655: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-3f214170-4d90-4794-9717-949d15bfd0fe.1611ccc1ec9a203d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8905/filler-pod-3f214170-4d90-4794-9717-949d15bfd0fe to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-3f214170-4d90-4794-9717-949d15bfd0fe.1611ccc25b82a598], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-3f214170-4d90-4794-9717-949d15bfd0fe.1611ccc2b5abb02d], Reason = [Created], Message = [Created container filler-pod-3f214170-4d90-4794-9717-949d15bfd0fe] STEP: Considering event: Type = [Normal], Name = [filler-pod-3f214170-4d90-4794-9717-949d15bfd0fe.1611ccc2c6aae4b9], Reason = [Started], Message = [Started container filler-pod-3f214170-4d90-4794-9717-949d15bfd0fe] STEP: Considering event: Type = [Normal], Name = [filler-pod-dd614a33-29b9-4182-afc1-cca92f4e58b8.1611ccc1eb0300cf], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8905/filler-pod-dd614a33-29b9-4182-afc1-cca92f4e58b8 to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-dd614a33-29b9-4182-afc1-cca92f4e58b8.1611ccc247ad4e13], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-dd614a33-29b9-4182-afc1-cca92f4e58b8.1611ccc2a05d2330], Reason = [Created], Message = [Created container filler-pod-dd614a33-29b9-4182-afc1-cca92f4e58b8] STEP: Considering event: Type = [Normal], Name = [filler-pod-dd614a33-29b9-4182-afc1-cca92f4e58b8.1611ccc2b5abb063], Reason = [Started], Message = [Started container filler-pod-dd614a33-29b9-4182-afc1-cca92f4e58b8] STEP: Considering event: Type = [Warning], Name = [additional-pod.1611ccc35721436d], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] 
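The filler request of cpu=11130m is not arbitrary: the test subtracts the per-node requests it just logged from each node's allocatable CPU so that one more pod cannot fit, which is exactly why the additional pod then fails with "Insufficient cpu". Illustrative arithmetic with the apimachinery resource types; the 11230m allocatable figure is inferred from the log (11130m filler + 100m kindnet), not printed in it:

    package main

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
        // latest-worker: allocatable CPU minus already-requested CPU.
        allocatable := resource.MustParse("11230m") // inferred, see lead-in above
        requested := resource.MustParse("100m")     // kindnet-hg2tf, per the log

        filler := allocatable.DeepCopy()
        filler.Sub(requested)
        fmt.Println("filler pod CPU request:", filler.String()) // 11130m
    }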
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 23:41:23.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:161
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 23:41:23.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3889" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":288,"completed":28,"skipped":559,"failed":0}
------------------------------
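"verifying QOS class is set on the pod" checks the QOSClass the apiserver derives from the resource stanza: when every container's requests equal its limits, the pod is Guaranteed. A sketch of such a stanza and the status field the test inspects (the 100m/100Mi values are illustrative):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
        // Requests identical to limits on every container => Guaranteed.
        res := corev1.ResourceRequirements{
            Requests: corev1.ResourceList{
                corev1.ResourceCPU:    resource.MustParse("100m"),
                corev1.ResourceMemory: resource.MustParse("100Mi"),
            },
            Limits: corev1.ResourceList{
                corev1.ResourceCPU:    resource.MustParse("100m"),
                corev1.ResourceMemory: resource.MustParse("100Mi"),
            },
        }
        fmt.Printf("%+v\n", res)
        // After creation, the expectation is:
        //   pod.Status.QOSClass == corev1.PodQOSGuaranteed
        fmt.Println("expected QOS class:", corev1.PodQOSGuaranteed)
    }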
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":288,"completed":28,"skipped":559,"failed":0} ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:41:23.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation May 23 23:41:24.061: INFO: >>> kubeConfig: /root/.kube/config May 23 23:41:27.023: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:41:39.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6186" for this suite. • [SLOW TEST:15.440 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":288,"completed":29,"skipped":559,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:41:39.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation May 23 23:41:39.496: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation May 23 23:41:50.306: INFO: >>> kubeConfig: /root/.kube/config May 23 23:41:52.245: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] 
CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:42:03.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8605" for this suite. • [SLOW TEST:23.615 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":288,"completed":30,"skipped":585,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:42:03.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating api versions May 23 23:42:03.116: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config api-versions' May 23 23:42:03.412: INFO: stderr: "" May 23 23:42:03.412: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:42:03.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8166" for this suite. 
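The api-versions check above boils down to a one-liner; a hand-run equivalent, assuming the same kubeconfig as the run:
kubectl --kubeconfig=/root/.kube/config api-versions | grep -x v1 && echo 'v1 is served'
# grep -x matches the whole line, so this exits non-zero unless the core v1 groupVersion is present.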
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":288,"completed":31,"skipped":627,"failed":0} SSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:42:03.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9648.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9648.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 23 23:42:11.636: INFO: DNS probes using dns-9648/dns-test-5bd3011e-1626-4b41-aa60-442d524869cf succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:42:11.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9648" for this suite. 
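A hand-run version of one probe above, swapping the dig loop for busybox's nslookup (probe pod name and image choice are illustrative, not what the suite uses):
kubectl run dns-probe --image=busybox:1.28 --restart=Never --rm -it -- \
  nslookup kubernetes.default.svc.cluster.local
# A non-empty answer is the same signal each wheezy/jessie loop records by writing its OK marker file.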
• [SLOW TEST:8.292 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":288,"completed":32,"skipped":631,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:42:11.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 23 23:42:11.822: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cdcd59ec-5a84-4a69-8309-7b80f23bf78a" in namespace "projected-6123" to be "Succeeded or Failed" May 23 23:42:12.067: INFO: Pod "downwardapi-volume-cdcd59ec-5a84-4a69-8309-7b80f23bf78a": Phase="Pending", Reason="", readiness=false. Elapsed: 244.446893ms May 23 23:42:14.070: INFO: Pod "downwardapi-volume-cdcd59ec-5a84-4a69-8309-7b80f23bf78a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.247962204s May 23 23:42:16.074: INFO: Pod "downwardapi-volume-cdcd59ec-5a84-4a69-8309-7b80f23bf78a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.251876965s STEP: Saw pod success May 23 23:42:16.074: INFO: Pod "downwardapi-volume-cdcd59ec-5a84-4a69-8309-7b80f23bf78a" satisfied condition "Succeeded or Failed" May 23 23:42:16.077: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-cdcd59ec-5a84-4a69-8309-7b80f23bf78a container client-container: STEP: delete the pod May 23 23:42:16.169: INFO: Waiting for pod downwardapi-volume-cdcd59ec-5a84-4a69-8309-7b80f23bf78a to disappear May 23 23:42:16.179: INFO: Pod downwardapi-volume-cdcd59ec-5a84-4a69-8309-7b80f23bf78a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:42:16.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6123" for this suite. 
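The volume wiring behind this spec, written out as a sketch (all names hypothetical): a projected downwardAPI source renders metadata.name into a file that the container reads back.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: podname-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.28
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF
kubectl logs podname-demo   # once Succeeded, prints: podname-demo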
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":288,"completed":33,"skipped":640,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:42:16.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 23 23:42:16.319: INFO: Waiting up to 5m0s for pod "downwardapi-volume-53c54f52-28eb-4447-914c-a69fb598d2fc" in namespace "downward-api-7751" to be "Succeeded or Failed" May 23 23:42:16.339: INFO: Pod "downwardapi-volume-53c54f52-28eb-4447-914c-a69fb598d2fc": Phase="Pending", Reason="", readiness=false. Elapsed: 20.273376ms May 23 23:42:18.343: INFO: Pod "downwardapi-volume-53c54f52-28eb-4447-914c-a69fb598d2fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024193453s May 23 23:42:20.347: INFO: Pod "downwardapi-volume-53c54f52-28eb-4447-914c-a69fb598d2fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028041451s STEP: Saw pod success May 23 23:42:20.347: INFO: Pod "downwardapi-volume-53c54f52-28eb-4447-914c-a69fb598d2fc" satisfied condition "Succeeded or Failed" May 23 23:42:20.350: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-53c54f52-28eb-4447-914c-a69fb598d2fc container client-container: STEP: delete the pod May 23 23:42:20.387: INFO: Waiting for pod downwardapi-volume-53c54f52-28eb-4447-914c-a69fb598d2fc to disappear May 23 23:42:20.449: INFO: Pod downwardapi-volume-53c54f52-28eb-4447-914c-a69fb598d2fc no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:42:20.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7751" for this suite. 
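The downward API detail this spec leans on: when a container declares no cpu limit, a resourceFieldRef on limits.cpu resolves to the node's allocatable cpu instead. A minimal sketch (hypothetical names):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: default-cpu-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.28
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # no resources.limits on purpose
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
EOF
# With no limit set, the projected file holds the node's allocatable cpu rather than a container limit.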
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":34,"skipped":651,"failed":0} SSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:42:20.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-02345a0a-bff1-4094-9387-03860f3b5a66 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:42:26.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1475" for this suite. • [SLOW TEST:6.177 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":35,"skipped":657,"failed":0} [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:42:26.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:42:44.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2817" for this suite. 
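A sketch of the fail-then-succeed pattern such a job typically uses (the suite's actual image and script may differ): restartPolicy: OnFailure restarts the container in place, and an emptyDir marker survives that restart, so each pod fails once and then completes.
kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: fail-once-locally
spec:
  completions: 2
  parallelism: 2
  template:
    spec:
      restartPolicy: OnFailure   # restart inside the same pod instead of spawning a new one
      containers:
      - name: c
        image: busybox:1.28
        command: ["sh", "-c", "if [ -f /data/done ]; then exit 0; else touch /data/done; exit 1; fi"]
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        emptyDir: {}             # outlives container restarts for the life of the pod
EOF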
• [SLOW TEST:18.084 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":288,"completed":36,"skipped":657,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:42:44.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container May 23 23:42:50.902: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-8785 PodName:pod-sharedvolume-cf73fd88-463c-4e31-be70-52edbc8860a5 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 23 23:42:50.902: INFO: >>> kubeConfig: /root/.kube/config I0523 23:42:50.934904 7 log.go:172] (0xc002c2c6e0) (0xc000b86dc0) Create stream I0523 23:42:50.934934 7 log.go:172] (0xc002c2c6e0) (0xc000b86dc0) Stream added, broadcasting: 1 I0523 23:42:50.936873 7 log.go:172] (0xc002c2c6e0) Reply frame received for 1 I0523 23:42:50.936919 7 log.go:172] (0xc002c2c6e0) (0xc000644820) Create stream I0523 23:42:50.936932 7 log.go:172] (0xc002c2c6e0) (0xc000644820) Stream added, broadcasting: 3 I0523 23:42:50.938124 7 log.go:172] (0xc002c2c6e0) Reply frame received for 3 I0523 23:42:50.938154 7 log.go:172] (0xc002c2c6e0) (0xc000d48dc0) Create stream I0523 23:42:50.938168 7 log.go:172] (0xc002c2c6e0) (0xc000d48dc0) Stream added, broadcasting: 5 I0523 23:42:50.939002 7 log.go:172] (0xc002c2c6e0) Reply frame received for 5 I0523 23:42:51.026826 7 log.go:172] (0xc002c2c6e0) Data frame received for 5 I0523 23:42:51.026861 7 log.go:172] (0xc000d48dc0) (5) Data frame handling I0523 23:42:51.026884 7 log.go:172] (0xc002c2c6e0) Data frame received for 3 I0523 23:42:51.026900 7 log.go:172] (0xc000644820) (3) Data frame handling I0523 23:42:51.026923 7 log.go:172] (0xc000644820) (3) Data frame sent I0523 23:42:51.026931 7 log.go:172] (0xc002c2c6e0) Data frame received for 3 I0523 23:42:51.026943 7 log.go:172] (0xc000644820) (3) Data frame handling I0523 23:42:51.028229 7 log.go:172] (0xc002c2c6e0) Data frame received for 1 I0523 23:42:51.028244 7 log.go:172] (0xc000b86dc0) (1) Data frame handling I0523 23:42:51.028256 7 log.go:172] (0xc000b86dc0) (1) Data frame sent I0523 23:42:51.028267 7 log.go:172] (0xc002c2c6e0) (0xc000b86dc0) Stream removed, broadcasting: 1 I0523 23:42:51.028384 7 log.go:172] (0xc002c2c6e0) Go away received I0523 23:42:51.028589 7 log.go:172] 
(0xc002c2c6e0) (0xc000b86dc0) Stream removed, broadcasting: 1 I0523 23:42:51.028608 7 log.go:172] (0xc002c2c6e0) (0xc000644820) Stream removed, broadcasting: 3 I0523 23:42:51.028620 7 log.go:172] (0xc002c2c6e0) (0xc000d48dc0) Stream removed, broadcasting: 5 May 23 23:42:51.028: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:42:51.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8785" for this suite. • [SLOW TEST:6.317 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":288,"completed":37,"skipped":690,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:42:51.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 23 23:42:55.673: INFO: Successfully updated pod "labelsupdate12b81f31-3771-4c93-93f3-acf38092547a" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:42:59.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8566" for this suite. 
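What "Successfully updated pod" corresponds to above: the test mutates the pod's labels through the API and waits for the kubelet to rewrite the projected file in place. A hand-run sketch, assuming a pod (hypothetical name labelsupdate-demo) that mounts a downwardAPI volume with fieldPath: metadata.labels at /etc/podinfo/labels:
kubectl label pod labelsupdate-demo mylabel=v2 --overwrite
# The kubelet refreshes downwardAPI volume contents on its periodic sync, so shortly afterwards:
kubectl exec labelsupdate-demo -- cat /etc/podinfo/labels   # now includes mylabel="v2", with no container restart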
• [SLOW TEST:8.681 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":288,"completed":38,"skipped":730,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:42:59.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating secret secrets-5109/secret-test-802bd7bd-3abc-4cf9-b239-82dcc22f4b67 STEP: Creating a pod to test consume secrets May 23 23:42:59.794: INFO: Waiting up to 5m0s for pod "pod-configmaps-df439eed-f27f-4f89-b4c3-6d26e15e5c56" in namespace "secrets-5109" to be "Succeeded or Failed" May 23 23:42:59.798: INFO: Pod "pod-configmaps-df439eed-f27f-4f89-b4c3-6d26e15e5c56": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053641ms May 23 23:43:01.801: INFO: Pod "pod-configmaps-df439eed-f27f-4f89-b4c3-6d26e15e5c56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007719329s May 23 23:43:03.806: INFO: Pod "pod-configmaps-df439eed-f27f-4f89-b4c3-6d26e15e5c56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012068047s STEP: Saw pod success May 23 23:43:03.806: INFO: Pod "pod-configmaps-df439eed-f27f-4f89-b4c3-6d26e15e5c56" satisfied condition "Succeeded or Failed" May 23 23:43:03.809: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-df439eed-f27f-4f89-b4c3-6d26e15e5c56 container env-test: STEP: delete the pod May 23 23:43:03.866: INFO: Waiting for pod pod-configmaps-df439eed-f27f-4f89-b4c3-6d26e15e5c56 to disappear May 23 23:43:03.892: INFO: Pod pod-configmaps-df439eed-f27f-4f89-b4c3-6d26e15e5c56 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:43:03.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5109" for this suite. 
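The secret-to-environment wiring this spec covers, written out by hand (all names hypothetical):
kubectl create secret generic secret-env-demo --from-literal=SECRET_DATA=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.28
    command: ["sh", "-c", "printenv SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-env-demo
          key: SECRET_DATA
EOF
kubectl logs secret-env-pod   # once Succeeded, prints: value-1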
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":288,"completed":39,"skipped":735,"failed":0} SSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:43:03.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-3551/configmap-test-b3798358-f307-4d94-b5a3-fad65cab77ad STEP: Creating a pod to test consume configMaps May 23 23:43:03.955: INFO: Waiting up to 5m0s for pod "pod-configmaps-286cec0d-ed21-4c17-9dc6-5f6c2ec6a714" in namespace "configmap-3551" to be "Succeeded or Failed" May 23 23:43:03.972: INFO: Pod "pod-configmaps-286cec0d-ed21-4c17-9dc6-5f6c2ec6a714": Phase="Pending", Reason="", readiness=false. Elapsed: 17.085973ms May 23 23:43:05.976: INFO: Pod "pod-configmaps-286cec0d-ed21-4c17-9dc6-5f6c2ec6a714": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021263543s May 23 23:43:07.980: INFO: Pod "pod-configmaps-286cec0d-ed21-4c17-9dc6-5f6c2ec6a714": Phase="Running", Reason="", readiness=true. Elapsed: 4.025190471s May 23 23:43:09.984: INFO: Pod "pod-configmaps-286cec0d-ed21-4c17-9dc6-5f6c2ec6a714": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029546545s STEP: Saw pod success May 23 23:43:09.984: INFO: Pod "pod-configmaps-286cec0d-ed21-4c17-9dc6-5f6c2ec6a714" satisfied condition "Succeeded or Failed" May 23 23:43:09.987: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-286cec0d-ed21-4c17-9dc6-5f6c2ec6a714 container env-test: STEP: delete the pod May 23 23:43:10.013: INFO: Waiting for pod pod-configmaps-286cec0d-ed21-4c17-9dc6-5f6c2ec6a714 to disappear May 23 23:43:10.015: INFO: Pod pod-configmaps-286cec0d-ed21-4c17-9dc6-5f6c2ec6a714 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:43:10.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3551" for this suite. 
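Same pattern for ConfigMaps; here envFrom imports every key of the map as an environment variable (names hypothetical):
kubectl create configmap config-env-demo --from-literal=DATA_1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: config-env-pod
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.28
    command: ["sh", "-c", "printenv DATA_1"]
    envFrom:
    - configMapRef:
        name: config-env-demo
EOF
kubectl logs config-env-pod   # once Succeeded, prints: value-1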
• [SLOW TEST:6.128 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":288,"completed":40,"skipped":743,"failed":0} SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:43:10.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-6802 STEP: creating a selector STEP: Creating the service pods in kubernetes May 23 23:43:10.100: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 23 23:43:10.223: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 23 23:43:12.227: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 23 23:43:14.234: INFO: The status of Pod netserver-0 is Running (Ready = false) May 23 23:43:16.228: INFO: The status of Pod netserver-0 is Running (Ready = false) May 23 23:43:18.228: INFO: The status of Pod netserver-0 is Running (Ready = false) May 23 23:43:20.234: INFO: The status of Pod netserver-0 is Running (Ready = false) May 23 23:43:22.227: INFO: The status of Pod netserver-0 is Running (Ready = false) May 23 23:43:24.228: INFO: The status of Pod netserver-0 is Running (Ready = false) May 23 23:43:26.228: INFO: The status of Pod netserver-0 is Running (Ready = false) May 23 23:43:28.240: INFO: The status of Pod netserver-0 is Running (Ready = true) May 23 23:43:28.245: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 23 23:43:32.266: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.77:8080/dial?request=hostname&protocol=udp&host=10.244.1.76&port=8081&tries=1'] Namespace:pod-network-test-6802 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 23 23:43:32.266: INFO: >>> kubeConfig: /root/.kube/config I0523 23:43:32.295754 7 log.go:172] (0xc0035ea840) (0xc001413ae0) Create stream I0523 23:43:32.295781 7 log.go:172] (0xc0035ea840) (0xc001413ae0) Stream added, broadcasting: 1 I0523 23:43:32.298357 7 log.go:172] (0xc0035ea840) Reply frame received for 1 I0523 23:43:32.298427 7 log.go:172] (0xc0035ea840) (0xc001780000) Create stream I0523 23:43:32.298442 7 log.go:172] (0xc0035ea840) (0xc001780000) Stream added, broadcasting: 3 I0523 23:43:32.299520 7 log.go:172] (0xc0035ea840) Reply frame received for 3 I0523 23:43:32.299564 7 log.go:172] (0xc0035ea840) (0xc001223900) 
Create stream I0523 23:43:32.299581 7 log.go:172] (0xc0035ea840) (0xc001223900) Stream added, broadcasting: 5 I0523 23:43:32.300561 7 log.go:172] (0xc0035ea840) Reply frame received for 5 I0523 23:43:32.444509 7 log.go:172] (0xc0035ea840) Data frame received for 3 I0523 23:43:32.444539 7 log.go:172] (0xc001780000) (3) Data frame handling I0523 23:43:32.444557 7 log.go:172] (0xc001780000) (3) Data frame sent I0523 23:43:32.445505 7 log.go:172] (0xc0035ea840) Data frame received for 3 I0523 23:43:32.445544 7 log.go:172] (0xc001780000) (3) Data frame handling I0523 23:43:32.445857 7 log.go:172] (0xc0035ea840) Data frame received for 5 I0523 23:43:32.445894 7 log.go:172] (0xc001223900) (5) Data frame handling I0523 23:43:32.447429 7 log.go:172] (0xc0035ea840) Data frame received for 1 I0523 23:43:32.447461 7 log.go:172] (0xc001413ae0) (1) Data frame handling I0523 23:43:32.447478 7 log.go:172] (0xc001413ae0) (1) Data frame sent I0523 23:43:32.447529 7 log.go:172] (0xc0035ea840) (0xc001413ae0) Stream removed, broadcasting: 1 I0523 23:43:32.447601 7 log.go:172] (0xc0035ea840) Go away received I0523 23:43:32.447730 7 log.go:172] (0xc0035ea840) (0xc001413ae0) Stream removed, broadcasting: 1 I0523 23:43:32.447764 7 log.go:172] (0xc0035ea840) (0xc001780000) Stream removed, broadcasting: 3 I0523 23:43:32.447780 7 log.go:172] (0xc0035ea840) (0xc001223900) Stream removed, broadcasting: 5 May 23 23:43:32.447: INFO: Waiting for responses: map[] May 23 23:43:32.451: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.77:8080/dial?request=hostname&protocol=udp&host=10.244.2.58&port=8081&tries=1'] Namespace:pod-network-test-6802 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 23 23:43:32.451: INFO: >>> kubeConfig: /root/.kube/config I0523 23:43:32.482870 7 log.go:172] (0xc002c2cf20) (0xc001223c20) Create stream I0523 23:43:32.482915 7 log.go:172] (0xc002c2cf20) (0xc001223c20) Stream added, broadcasting: 1 I0523 23:43:32.485029 7 log.go:172] (0xc002c2cf20) Reply frame received for 1 I0523 23:43:32.485074 7 log.go:172] (0xc002c2cf20) (0xc0014aed20) Create stream I0523 23:43:32.485089 7 log.go:172] (0xc002c2cf20) (0xc0014aed20) Stream added, broadcasting: 3 I0523 23:43:32.486048 7 log.go:172] (0xc002c2cf20) Reply frame received for 3 I0523 23:43:32.486080 7 log.go:172] (0xc002c2cf20) (0xc0012aaf00) Create stream I0523 23:43:32.486091 7 log.go:172] (0xc002c2cf20) (0xc0012aaf00) Stream added, broadcasting: 5 I0523 23:43:32.487004 7 log.go:172] (0xc002c2cf20) Reply frame received for 5 I0523 23:43:32.556806 7 log.go:172] (0xc002c2cf20) Data frame received for 3 I0523 23:43:32.556853 7 log.go:172] (0xc0014aed20) (3) Data frame handling I0523 23:43:32.556887 7 log.go:172] (0xc0014aed20) (3) Data frame sent I0523 23:43:32.558021 7 log.go:172] (0xc002c2cf20) Data frame received for 3 I0523 23:43:32.558036 7 log.go:172] (0xc0014aed20) (3) Data frame handling I0523 23:43:32.558063 7 log.go:172] (0xc002c2cf20) Data frame received for 5 I0523 23:43:32.558082 7 log.go:172] (0xc0012aaf00) (5) Data frame handling I0523 23:43:32.559853 7 log.go:172] (0xc002c2cf20) Data frame received for 1 I0523 23:43:32.560033 7 log.go:172] (0xc001223c20) (1) Data frame handling I0523 23:43:32.560099 7 log.go:172] (0xc001223c20) (1) Data frame sent I0523 23:43:32.560130 7 log.go:172] (0xc002c2cf20) (0xc001223c20) Stream removed, broadcasting: 1 I0523 23:43:32.560146 7 log.go:172] (0xc002c2cf20) Go away received I0523 23:43:32.560285 7 
log.go:172] (0xc002c2cf20) (0xc001223c20) Stream removed, broadcasting: 1 I0523 23:43:32.560314 7 log.go:172] (0xc002c2cf20) (0xc0014aed20) Stream removed, broadcasting: 3 I0523 23:43:32.560327 7 log.go:172] (0xc002c2cf20) (0xc0012aaf00) Stream removed, broadcasting: 5 May 23 23:43:32.560: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:43:32.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6802" for this suite. • [SLOW TEST:22.536 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":288,"completed":41,"skipped":745,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:43:32.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-31543e6c-81f3-44ba-b574-e0ed45ae30b8 STEP: Creating a pod to test consume configMaps May 23 23:43:32.655: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-75cf2603-d540-4d88-8df8-763c1413fe72" in namespace "projected-2243" to be "Succeeded or Failed" May 23 23:43:32.695: INFO: Pod "pod-projected-configmaps-75cf2603-d540-4d88-8df8-763c1413fe72": Phase="Pending", Reason="", readiness=false. Elapsed: 40.682401ms May 23 23:43:34.700: INFO: Pod "pod-projected-configmaps-75cf2603-d540-4d88-8df8-763c1413fe72": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045139733s May 23 23:43:36.705: INFO: Pod "pod-projected-configmaps-75cf2603-d540-4d88-8df8-763c1413fe72": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.04995089s STEP: Saw pod success May 23 23:43:36.705: INFO: Pod "pod-projected-configmaps-75cf2603-d540-4d88-8df8-763c1413fe72" satisfied condition "Succeeded or Failed" May 23 23:43:36.708: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-75cf2603-d540-4d88-8df8-763c1413fe72 container projected-configmap-volume-test: STEP: delete the pod May 23 23:43:36.740: INFO: Waiting for pod pod-projected-configmaps-75cf2603-d540-4d88-8df8-763c1413fe72 to disappear May 23 23:43:36.772: INFO: Pod pod-projected-configmaps-75cf2603-d540-4d88-8df8-763c1413fe72 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:43:36.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2243" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":288,"completed":42,"skipped":759,"failed":0} S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:43:36.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:43:45.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-723" for this suite. 
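The assertion behind this Kubelet spec, reproduced by hand: a container whose command always fails leaves a populated state.terminated in the pod status, and its reason is readable via jsonpath (pod name hypothetical; the suite's exact pod shape may differ):
kubectl run always-fails --image=busybox:1.28 --restart=Never -- false
sleep 5   # give the kubelet a moment to record the exit
kubectl get pod always-fails \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'   # expected: Error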
• [SLOW TEST:8.247 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:79 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":288,"completed":43,"skipped":760,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:43:45.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 23 23:43:45.221: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8634 /api/v1/namespaces/watch-8634/configmaps/e2e-watch-test-watch-closed 0c3cec4a-239d-428d-92dd-4f4af8d7d01c 7143256 0 2020-05-23 23:43:45 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-23 23:43:45 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 23 23:43:45.222: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8634 /api/v1/namespaces/watch-8634/configmaps/e2e-watch-test-watch-closed 0c3cec4a-239d-428d-92dd-4f4af8d7d01c 7143257 0 2020-05-23 23:43:45 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-23 23:43:45 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 23 23:43:45.298: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8634 /api/v1/namespaces/watch-8634/configmaps/e2e-watch-test-watch-closed 0c3cec4a-239d-428d-92dd-4f4af8d7d01c 7143260 0 2020-05-23 23:43:45 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-23 23:43:45 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 23 23:43:45.298: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8634 /api/v1/namespaces/watch-8634/configmaps/e2e-watch-test-watch-closed 0c3cec4a-239d-428d-92dd-4f4af8d7d01c 7143261 0 2020-05-23 23:43:45 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-05-23 23:43:45 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:43:45.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8634" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":288,"completed":44,"skipped":766,"failed":0} ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:43:45.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Starting the proxy May 23 23:43:45.382: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix026908240/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:43:45.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7975" for this suite. 
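A hand-run equivalent of the unix-socket proxy check above (socket path hypothetical; curl needs --unix-socket support, 7.40+):
kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
sleep 1
curl --silent --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/   # same /api/ APIVersions JSON the test retrieves
kill $!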
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":288,"completed":45,"skipped":766,"failed":0} SSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:43:45.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override arguments May 23 23:43:45.621: INFO: Waiting up to 5m0s for pod "client-containers-fe89180f-0eaa-4fb1-9cc4-8cf6a0a07feb" in namespace "containers-6782" to be "Succeeded or Failed" May 23 23:43:45.625: INFO: Pod "client-containers-fe89180f-0eaa-4fb1-9cc4-8cf6a0a07feb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.590289ms May 23 23:43:47.629: INFO: Pod "client-containers-fe89180f-0eaa-4fb1-9cc4-8cf6a0a07feb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008499559s May 23 23:43:49.632: INFO: Pod "client-containers-fe89180f-0eaa-4fb1-9cc4-8cf6a0a07feb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011566052s STEP: Saw pod success May 23 23:43:49.632: INFO: Pod "client-containers-fe89180f-0eaa-4fb1-9cc4-8cf6a0a07feb" satisfied condition "Succeeded or Failed" May 23 23:43:49.635: INFO: Trying to get logs from node latest-worker pod client-containers-fe89180f-0eaa-4fb1-9cc4-8cf6a0a07feb container test-container: STEP: delete the pod May 23 23:43:49.691: INFO: Waiting for pod client-containers-fe89180f-0eaa-4fb1-9cc4-8cf6a0a07feb to disappear May 23 23:43:49.746: INFO: Pod client-containers-fe89180f-0eaa-4fb1-9cc4-8cf6a0a07feb no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:43:49.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6782" for this suite. 
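The Docker-to-Kubernetes mapping this spec pins down: command: replaces the image's ENTRYPOINT and args: replaces its CMD, so default arguments can be overridden without rebuilding the image. A sketch (names hypothetical):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: override-args-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.28
    command: ["echo"]                    # ENTRYPOINT equivalent
    args: ["overridden", "arguments"]    # CMD equivalent -- the part this spec overrides
EOF
kubectl logs override-args-demo   # once Succeeded, prints: overridden arguments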
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":288,"completed":46,"skipped":769,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:43:49.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 23 23:43:49.886: INFO: Waiting up to 5m0s for pod "downward-api-b680f599-d3ab-4259-acc1-b6d05fe202d4" in namespace "downward-api-6176" to be "Succeeded or Failed" May 23 23:43:49.914: INFO: Pod "downward-api-b680f599-d3ab-4259-acc1-b6d05fe202d4": Phase="Pending", Reason="", readiness=false. Elapsed: 28.095969ms May 23 23:43:51.918: INFO: Pod "downward-api-b680f599-d3ab-4259-acc1-b6d05fe202d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032511942s May 23 23:43:53.922: INFO: Pod "downward-api-b680f599-d3ab-4259-acc1-b6d05fe202d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036257404s STEP: Saw pod success May 23 23:43:53.922: INFO: Pod "downward-api-b680f599-d3ab-4259-acc1-b6d05fe202d4" satisfied condition "Succeeded or Failed" May 23 23:43:53.925: INFO: Trying to get logs from node latest-worker2 pod downward-api-b680f599-d3ab-4259-acc1-b6d05fe202d4 container dapi-container: STEP: delete the pod May 23 23:43:53.965: INFO: Waiting for pod downward-api-b680f599-d3ab-4259-acc1-b6d05fe202d4 to disappear May 23 23:43:53.979: INFO: Pod downward-api-b680f599-d3ab-4259-acc1-b6d05fe202d4 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:43:53.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6176" for this suite. 
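The wiring this spec verifies, spelled out: status.hostIP is surfaced to the container through an env var fieldRef (names hypothetical):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: host-ip-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.28
    command: ["sh", "-c", "printenv HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
EOF
kubectl logs host-ip-demo   # once Succeeded, prints the IP of whichever node ran the pod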
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":288,"completed":47,"skipped":781,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:43:54.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 23 23:43:54.217: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9044' May 23 23:43:58.831: INFO: stderr: "" May 23 23:43:58.831: INFO: stdout: "replicationcontroller/agnhost-master created\n" May 23 23:43:58.831: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9044' May 23 23:44:02.794: INFO: stderr: "" May 23 23:44:02.794: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 23 23:44:03.815: INFO: Selector matched 1 pods for map[app:agnhost] May 23 23:44:03.815: INFO: Found 0 / 1 May 23 23:44:04.803: INFO: Selector matched 1 pods for map[app:agnhost] May 23 23:44:04.804: INFO: Found 1 / 1 May 23 23:44:04.804: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 23 23:44:04.807: INFO: Selector matched 1 pods for map[app:agnhost] May 23 23:44:04.807: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 23 23:44:04.807: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe pod agnhost-master-8z8ff --namespace=kubectl-9044' May 23 23:44:04.927: INFO: stderr: "" May 23 23:44:04.927: INFO: stdout: "Name: agnhost-master-8z8ff\nNamespace: kubectl-9044\nPriority: 0\nNode: latest-worker/172.17.0.13\nStart Time: Sat, 23 May 2020 23:43:58 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.80\nIPs:\n IP: 10.244.1.80\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://2940ab7bf9b498e15798ad50665fd8dcf5522effd072eb924c8847af0badf12e\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sat, 23 May 2020 23:44:03 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-55m8l (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-55m8l:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-55m8l\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 6s default-scheduler Successfully assigned kubectl-9044/agnhost-master-8z8ff to latest-worker\n Normal Pulled 3s kubelet, latest-worker Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\" already present on machine\n Normal Created 2s kubelet, latest-worker Created container agnhost-master\n Normal Started 1s kubelet, latest-worker Started container agnhost-master\n" May 23 23:44:04.927: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-9044' May 23 23:44:05.060: INFO: stderr: "" May 23 23:44:05.061: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-9044\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 7s replication-controller Created pod: agnhost-master-8z8ff\n" May 23 23:44:05.061: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-9044' May 23 23:44:05.170: INFO: stderr: "" May 23 23:44:05.170: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-9044\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.97.244.184\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.80:6379\nSession Affinity: None\nEvents: \n" May 23 23:44:05.174: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config describe node 
latest-control-plane' May 23 23:44:05.292: INFO: stderr: "" May 23 23:44:05.292: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Wed, 29 Apr 2020 09:53:29 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Sat, 23 May 2020 23:44:03 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sat, 23 May 2020 23:41:59 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 23 May 2020 23:41:59 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 23 May 2020 23:41:59 +0000 Wed, 29 Apr 2020 09:53:26 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 23 May 2020 23:41:59 +0000 Wed, 29 Apr 2020 09:54:06 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3939cf129c9d4d6e85e611ab996d9137\n System UUID: 2573ae1d-4849-412e-9a34-432f95556990\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.18.2\n Kube-Proxy Version: v1.18.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-66bff467f8-8n5vh 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 24d\n kube-system coredns-66bff467f8-qr7l5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 24d\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 24d\n kube-system kindnet-8x7pf 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 24d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 24d\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 24d\n kube-system kube-proxy-h8mhz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 24d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 24d\n local-path-storage local-path-provisioner-bd4bb6b75-bmf2h 0 (0%) 0 (0%) 0 (0%) 0 (0%) 24d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" May 23 23:44:05.292: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 
--kubeconfig=/root/.kube/config describe namespace kubectl-9044' May 23 23:44:05.399: INFO: stderr: "" May 23 23:44:05.399: INFO: stdout: "Name: kubectl-9044\nLabels: e2e-framework=kubectl\n e2e-run=790663ad-edda-4dae-8310-5bd91a68e7e9\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:44:05.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9044" for this suite. • [SLOW TEST:11.384 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1083 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":288,"completed":48,"skipped":799,"failed":0} SSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:44:05.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 23 23:44:10.534: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:44:10.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-4800" for this suite.
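The adopt-and-release flow exercised above can be reproduced by hand: an orphan pod whose labels match a ReplicaSet's selector is adopted (it gains an ownerReference), and relabeling it releases it again. A minimal sketch, assuming a working kubectl context; the namespace, object names, and pause image are illustrative, not the test's own fixtures:

kubectl create ns rs-adopt-demo                  # hypothetical namespace
cat <<'EOF' | kubectl -n rs-adopt-demo apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: orphan                                   # starts with no owner
  labels:
    name: pod-adoption-release
spec:
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.2
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: adopter
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.2
EOF
# The orphan now carries an ownerReference to the ReplicaSet:
kubectl -n rs-adopt-demo get pod orphan -o jsonpath='{.metadata.ownerReferences[0].kind}'
# Overwriting the matched label releases the pod (the ownerReference is removed):
kubectl -n rs-adopt-demo label pod orphan name=released --overwrite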
• [SLOW TEST:5.730 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":288,"completed":49,"skipped":805,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:44:11.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name secret-emptykey-test-c63962c1-cfe9-408c-8f2e-399b7df27fe7 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:44:11.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8465" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":288,"completed":50,"skipped":817,"failed":0} SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:44:11.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-92fw STEP: Creating a pod to test atomic-volume-subpath May 23 23:44:11.853: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-92fw" in namespace "subpath-2675" to be "Succeeded or Failed" May 23 23:44:11.907: INFO: Pod "pod-subpath-test-configmap-92fw": Phase="Pending", Reason="", readiness=false. Elapsed: 54.203853ms May 23 23:44:14.046: INFO: Pod "pod-subpath-test-configmap-92fw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.192699625s May 23 23:44:16.050: INFO: Pod "pod-subpath-test-configmap-92fw": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.197429412s May 23 23:44:18.055: INFO: Pod "pod-subpath-test-configmap-92fw": Phase="Running", Reason="", readiness=true. Elapsed: 6.202310371s May 23 23:44:20.059: INFO: Pod "pod-subpath-test-configmap-92fw": Phase="Running", Reason="", readiness=true. Elapsed: 8.206424814s May 23 23:44:22.079: INFO: Pod "pod-subpath-test-configmap-92fw": Phase="Running", Reason="", readiness=true. Elapsed: 10.225591015s May 23 23:44:24.082: INFO: Pod "pod-subpath-test-configmap-92fw": Phase="Running", Reason="", readiness=true. Elapsed: 12.22870569s May 23 23:44:26.085: INFO: Pod "pod-subpath-test-configmap-92fw": Phase="Running", Reason="", readiness=true. Elapsed: 14.232095437s May 23 23:44:28.089: INFO: Pod "pod-subpath-test-configmap-92fw": Phase="Running", Reason="", readiness=true. Elapsed: 16.236296222s May 23 23:44:30.093: INFO: Pod "pod-subpath-test-configmap-92fw": Phase="Running", Reason="", readiness=true. Elapsed: 18.240158081s May 23 23:44:32.097: INFO: Pod "pod-subpath-test-configmap-92fw": Phase="Running", Reason="", readiness=true. Elapsed: 20.244409701s May 23 23:44:34.101: INFO: Pod "pod-subpath-test-configmap-92fw": Phase="Running", Reason="", readiness=true. Elapsed: 22.248545924s May 23 23:44:36.106: INFO: Pod "pod-subpath-test-configmap-92fw": Phase="Running", Reason="", readiness=true. Elapsed: 24.252785213s May 23 23:44:38.110: INFO: Pod "pod-subpath-test-configmap-92fw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.256773943s STEP: Saw pod success May 23 23:44:38.110: INFO: Pod "pod-subpath-test-configmap-92fw" satisfied condition "Succeeded or Failed" May 23 23:44:38.113: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-configmap-92fw container test-container-subpath-configmap-92fw: STEP: delete the pod May 23 23:44:38.158: INFO: Waiting for pod pod-subpath-test-configmap-92fw to disappear May 23 23:44:38.165: INFO: Pod pod-subpath-test-configmap-92fw no longer exists STEP: Deleting pod pod-subpath-test-configmap-92fw May 23 23:44:38.165: INFO: Deleting pod "pod-subpath-test-configmap-92fw" in namespace "subpath-2675" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:44:38.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2675" for this suite. 
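For reference, the subPath mount pattern this test exercises (a configMap key projected over an existing file) has the shape sketched below; the names and busybox image are illustrative, assuming a working kubectl context:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: subpath-demo              # hypothetical name
data:
  hosts: "127.0.0.1 demo.local"
---
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: app
    image: busybox:1.29
    command: ["cat", "/etc/hosts"]
    volumeMounts:
    - name: cm
      mountPath: /etc/hosts       # an existing file, overlaid in place
      subPath: hosts              # mount just this key, not the whole volume
  volumes:
  - name: cm
    configMap:
      name: subpath-demo
EOF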
• [SLOW TEST:26.650 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":288,"completed":51,"skipped":820,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:44:38.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:44:38.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6231" for this suite. 
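The discovery walk this test performs (fetch /apis, find the group, then the group/version, then the resource list) can be replayed with kubectl's raw API access; jq is assumed here only for filtering and is optional:

kubectl get --raw /apis | jq '.groups[] | select(.name=="apiextensions.k8s.io")'
kubectl get --raw /apis/apiextensions.k8s.io
kubectl get --raw /apis/apiextensions.k8s.io/v1 | jq '.resources[].name'   # includes customresourcedefinitions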
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":288,"completed":52,"skipped":860,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:44:38.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-5930 STEP: creating service affinity-clusterip in namespace services-5930 STEP: creating replication controller affinity-clusterip in namespace services-5930 I0523 23:44:38.633854 7 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-5930, replica count: 3 I0523 23:44:41.684221 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0523 23:44:44.684456 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 23 23:44:44.691: INFO: Creating new exec pod May 23 23:44:49.703: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5930 execpod-affinityhq7dj -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' May 23 23:44:50.025: INFO: stderr: "I0523 23:44:49.853909 222 log.go:172] (0xc000b72000) (0xc000257220) Create stream\nI0523 23:44:49.853959 222 log.go:172] (0xc000b72000) (0xc000257220) Stream added, broadcasting: 1\nI0523 23:44:49.855391 222 log.go:172] (0xc000b72000) Reply frame received for 1\nI0523 23:44:49.855425 222 log.go:172] (0xc000b72000) (0xc000b0e000) Create stream\nI0523 23:44:49.855437 222 log.go:172] (0xc000b72000) (0xc000b0e000) Stream added, broadcasting: 3\nI0523 23:44:49.856217 222 log.go:172] (0xc000b72000) Reply frame received for 3\nI0523 23:44:49.856273 222 log.go:172] (0xc000b72000) (0xc0003128c0) Create stream\nI0523 23:44:49.856295 222 log.go:172] (0xc000b72000) (0xc0003128c0) Stream added, broadcasting: 5\nI0523 23:44:49.857036 222 log.go:172] (0xc000b72000) Reply frame received for 5\nI0523 23:44:49.981035 222 log.go:172] (0xc000b72000) Data frame received for 5\nI0523 23:44:49.981065 222 log.go:172] (0xc0003128c0) (5) Data frame handling\nI0523 23:44:49.981087 222 log.go:172] (0xc0003128c0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip 80\nI0523 23:44:50.017339 222 log.go:172] (0xc000b72000) Data frame received for 5\nI0523 23:44:50.017369 222 log.go:172] (0xc0003128c0) (5) Data frame handling\nI0523 23:44:50.017385 222 log.go:172] (0xc0003128c0) (5) Data frame sent\nI0523 23:44:50.017393 222 log.go:172] 
(0xc000b72000) Data frame received for 5\nI0523 23:44:50.017399 222 log.go:172] (0xc0003128c0) (5) Data frame handling\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI0523 23:44:50.017737 222 log.go:172] (0xc000b72000) Data frame received for 3\nI0523 23:44:50.017758 222 log.go:172] (0xc000b0e000) (3) Data frame handling\nI0523 23:44:50.020284 222 log.go:172] (0xc000b72000) Data frame received for 1\nI0523 23:44:50.020322 222 log.go:172] (0xc000257220) (1) Data frame handling\nI0523 23:44:50.020346 222 log.go:172] (0xc000257220) (1) Data frame sent\nI0523 23:44:50.020366 222 log.go:172] (0xc000b72000) (0xc000257220) Stream removed, broadcasting: 1\nI0523 23:44:50.020870 222 log.go:172] (0xc000b72000) (0xc000257220) Stream removed, broadcasting: 1\nI0523 23:44:50.020903 222 log.go:172] (0xc000b72000) (0xc000b0e000) Stream removed, broadcasting: 3\nI0523 23:44:50.021086 222 log.go:172] (0xc000b72000) (0xc0003128c0) Stream removed, broadcasting: 5\n" May 23 23:44:50.025: INFO: stdout: "" May 23 23:44:50.026: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5930 execpod-affinityhq7dj -- /bin/sh -x -c nc -zv -t -w 2 10.96.10.191 80' May 23 23:44:50.236: INFO: stderr: "I0523 23:44:50.154310 243 log.go:172] (0xc00003abb0) (0xc000256c80) Create stream\nI0523 23:44:50.154374 243 log.go:172] (0xc00003abb0) (0xc000256c80) Stream added, broadcasting: 1\nI0523 23:44:50.157031 243 log.go:172] (0xc00003abb0) Reply frame received for 1\nI0523 23:44:50.157076 243 log.go:172] (0xc00003abb0) (0xc0004fe780) Create stream\nI0523 23:44:50.157085 243 log.go:172] (0xc00003abb0) (0xc0004fe780) Stream added, broadcasting: 3\nI0523 23:44:50.158094 243 log.go:172] (0xc00003abb0) Reply frame received for 3\nI0523 23:44:50.158134 243 log.go:172] (0xc00003abb0) (0xc0008fa960) Create stream\nI0523 23:44:50.158144 243 log.go:172] (0xc00003abb0) (0xc0008fa960) Stream added, broadcasting: 5\nI0523 23:44:50.159105 243 log.go:172] (0xc00003abb0) Reply frame received for 5\nI0523 23:44:50.229041 243 log.go:172] (0xc00003abb0) Data frame received for 5\nI0523 23:44:50.229300 243 log.go:172] (0xc0008fa960) (5) Data frame handling\nI0523 23:44:50.229334 243 log.go:172] (0xc0008fa960) (5) Data frame sent\nI0523 23:44:50.229352 243 log.go:172] (0xc00003abb0) Data frame received for 5\nI0523 23:44:50.229379 243 log.go:172] (0xc0008fa960) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.10.191 80\nConnection to 10.96.10.191 80 port [tcp/http] succeeded!\nI0523 23:44:50.229536 243 log.go:172] (0xc00003abb0) Data frame received for 3\nI0523 23:44:50.229591 243 log.go:172] (0xc0004fe780) (3) Data frame handling\nI0523 23:44:50.230798 243 log.go:172] (0xc00003abb0) Data frame received for 1\nI0523 23:44:50.230829 243 log.go:172] (0xc000256c80) (1) Data frame handling\nI0523 23:44:50.230857 243 log.go:172] (0xc000256c80) (1) Data frame sent\nI0523 23:44:50.230891 243 log.go:172] (0xc00003abb0) (0xc000256c80) Stream removed, broadcasting: 1\nI0523 23:44:50.231036 243 log.go:172] (0xc00003abb0) Go away received\nI0523 23:44:50.231195 243 log.go:172] (0xc00003abb0) (0xc000256c80) Stream removed, broadcasting: 1\nI0523 23:44:50.231217 243 log.go:172] (0xc00003abb0) (0xc0004fe780) Stream removed, broadcasting: 3\nI0523 23:44:50.231233 243 log.go:172] (0xc00003abb0) (0xc0008fa960) Stream removed, broadcasting: 5\n" May 23 23:44:50.236: INFO: stdout: "" May 23 23:44:50.236: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 
--kubeconfig=/root/.kube/config exec --namespace=services-5930 execpod-affinityhq7dj -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.10.191:80/ ; done' May 23 23:44:50.581: INFO: stderr: "I0523 23:44:50.377656 263 log.go:172] (0xc000aeac60) (0xc0005d0fa0) Create stream\nI0523 23:44:50.377730 263 log.go:172] (0xc000aeac60) (0xc0005d0fa0) Stream added, broadcasting: 1\nI0523 23:44:50.380943 263 log.go:172] (0xc000aeac60) Reply frame received for 1\nI0523 23:44:50.381008 263 log.go:172] (0xc000aeac60) (0xc000516be0) Create stream\nI0523 23:44:50.381035 263 log.go:172] (0xc000aeac60) (0xc000516be0) Stream added, broadcasting: 3\nI0523 23:44:50.382570 263 log.go:172] (0xc000aeac60) Reply frame received for 3\nI0523 23:44:50.382632 263 log.go:172] (0xc000aeac60) (0xc000253ea0) Create stream\nI0523 23:44:50.382648 263 log.go:172] (0xc000aeac60) (0xc000253ea0) Stream added, broadcasting: 5\nI0523 23:44:50.383650 263 log.go:172] (0xc000aeac60) Reply frame received for 5\nI0523 23:44:50.450561 263 log.go:172] (0xc000aeac60) Data frame received for 5\nI0523 23:44:50.450598 263 log.go:172] (0xc000253ea0) (5) Data frame handling\nI0523 23:44:50.450611 263 log.go:172] (0xc000253ea0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.10.191:80/\nI0523 23:44:50.450630 263 log.go:172] (0xc000aeac60) Data frame received for 3\nI0523 23:44:50.450638 263 log.go:172] (0xc000516be0) (3) Data frame handling\nI0523 23:44:50.450647 263 log.go:172] (0xc000516be0) (3) Data frame sent\nI0523 23:44:50.480221 263 log.go:172] (0xc000aeac60) Data frame received for 3\nI0523 23:44:50.480270 263 log.go:172] (0xc000516be0) (3) Data frame handling\nI0523 23:44:50.480303 263 log.go:172] (0xc000516be0) (3) Data frame sent\nI0523 23:44:50.480421 263 log.go:172] (0xc000aeac60) Data frame received for 3\nI0523 23:44:50.480437 263 log.go:172] (0xc000516be0) (3) Data frame handling\nI0523 23:44:50.480444 263 log.go:172] (0xc000516be0) (3) Data frame sent\nI0523 23:44:50.480453 263 log.go:172] (0xc000aeac60) Data frame received for 5\nI0523 23:44:50.480460 263 log.go:172] (0xc000253ea0) (5) Data frame handling\nI0523 23:44:50.480466 263 log.go:172] (0xc000253ea0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.10.191:80/\nI0523 23:44:50.488498 263 log.go:172] (0xc000aeac60) Data frame received for 3\nI0523 23:44:50.488524 263 log.go:172] (0xc000516be0) (3) Data frame handling\nI0523 23:44:50.488543 263 log.go:172] (0xc000516be0) (3) Data frame sent\nI0523 23:44:50.489052 263 log.go:172] (0xc000aeac60) Data frame received for 5\nI0523 23:44:50.489068 263 log.go:172] (0xc000253ea0) (5) Data frame handling\nI0523 23:44:50.489079 263 log.go:172] (0xc000253ea0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.10.191:80/\nI0523 23:44:50.489101 263 log.go:172] (0xc000aeac60) Data frame received for 3\nI0523 23:44:50.489356 263 log.go:172] (0xc000516be0) (3) Data frame handling\nI0523 23:44:50.489392 263 log.go:172] (0xc000516be0) (3) Data frame sent\nI0523 23:44:50.496187 263 log.go:172] (0xc000aeac60) Data frame received for 3\nI0523 23:44:50.496204 263 log.go:172] (0xc000516be0) (3) Data frame handling\nI0523 23:44:50.496216 263 log.go:172] (0xc000516be0) (3) Data frame sent\nI0523 23:44:50.496828 263 log.go:172] (0xc000aeac60) Data frame received for 5\nI0523 23:44:50.496849 263 log.go:172] (0xc000253ea0) (5) Data frame handling\nI0523 23:44:50.496868 263 log.go:172] (0xc000253ea0) (5) Data frame sent\n+ 
echo\n+ curl -q -s --connect-timeout 2 http://10.96.10.191:80/\nI0523 23:44:50.496915 263 log.go:172] (0xc000aeac60) Data frame received for 3\nI0523 23:44:50.496942 263 log.go:172] (0xc000516be0) (3) Data frame handling\nI0523 23:44:50.496951 263 log.go:172] (0xc000516be0) (3) Data frame sent\nI0523 23:44:50.503398 263 log.go:172] (0xc000aeac60) Data frame received for 3\nI0523 23:44:50.503422 263 log.go:172] (0xc000516be0) (3) Data frame handling\nI0523 23:44:50.503449 263 log.go:172] (0xc000516be0) (3) Data frame sent\nI0523 23:44:50.504179 263 log.go:172] (0xc000aeac60) Data frame received for 3\nI0523 23:44:50.504253 263 log.go:172] (0xc000516be0) (3) Data frame handling\nI0523 23:44:50.504274 263 log.go:172] (0xc000516be0) (3) Data frame sent\nI0523 23:44:50.504298 263 log.go:172] (0xc000aeac60) Data frame received for 5\nI0523 23:44:50.504312 263 log.go:172] (0xc000253ea0) (5) Data frame handling\nI0523 23:44:50.504329 263 log.go:172] (0xc000253ea0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.10.191:80/\nI0523 23:44:50.508798 263 log.go:172] (0xc000aeac60) Data frame received for 3\nI0523 23:44:50.508818 263 log.go:172] (0xc000516be0) (3) Data frame handling\nI0523 23:44:50.508841 263 log.go:172] (0xc000516be0) (3) Data frame sent\nI0523 23:44:50.509442 263 log.go:172] (0xc000aeac60) Data frame received for 5\nI0523 23:44:50.509461 263 log.go:172] (0xc000253ea0) (5) Data frame handling\nI0523 23:44:50.509480 263 log.go:172] (0xc000253ea0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.10.191:80/\nI0523 23:44:50.509571 263 log.go:172] (0xc000aeac60) Data frame received for 3\nI0523 23:44:50.509587 263 log.go:172] (0xc000516be0) (3) Data frame handling\nI0523 23:44:50.509600 263 log.go:172] (0xc000516be0) (3) Data frame sent\nI0523 23:44:50.514214 263 log.go:172] (0xc000aeac60) Data frame received for 3\nI0523 23:44:50.514239 263 log.go:172] (0xc000516be0) (3) Data frame handling\nI0523 23:44:50.514254 263 log.go:172] (0xc000516be0) (3) Data frame sent\nI0523 23:44:50.514702 263 log.go:172] (0xc000aeac60) Data frame received for 5\nI0523 23:44:50.514728 263 log.go:172] (0xc000253ea0) (5) Data frame handling\nI0523 23:44:50.514742 263 log.go:172] (0xc000253ea0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.10.191:80/\nI0523 23:44:50.514766 263 log.go:172] (0xc000aeac60) Data frame received for 3\nI0523 23:44:50.514780 263 log.go:172] (0xc000516be0) (3) Data frame handling\nI0523 23:44:50.514800 263 log.go:172] (0xc000516be0) (3) Data frame sent\nI0523 23:44:50.522858 263 log.go:172] (0xc000aeac60) Data frame received for 3\nI0523 23:44:50.522881 263 log.go:172] (0xc000516be0) (3) Data frame handling\nI0523 23:44:50.522898 263 log.go:172] (0xc000516be0) (3) Data frame sent\nI0523 23:44:50.523278 263 log.go:172] (0xc000aeac60) Data frame received for 5\nI0523 23:44:50.523290 263 log.go:172] (0xc000253ea0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.10.191:80/\nI0523 23:44:50.523304 263 log.go:172] (0xc000aeac60) Data frame received for 3\nI0523 23:44:50.523327 263 log.go:172] (0xc000516be0) (3) Data frame handling\nI0523 23:44:50.523350 263 log.go:172] (0xc000516be0) (3) Data frame sent\nI0523 23:44:50.523369 263 log.go:172] (0xc000253ea0) (5) Data frame sent\nI0523 23:44:50.529538 263 log.go:172] (0xc000aeac60) Data frame received for 3\nI0523 23:44:50.529558 263 log.go:172] (0xc000516be0) (3) Data frame handling\nI0523 23:44:50.529569 263 log.go:172] (0xc000516be0) 
(3) Data frame sent\nI0523 23:44:50.529740 263 log.go:172] (0xc000aeac60) Data frame received for 3\nI0523 23:44:50.529760 263 log.go:172] (0xc000516be0) (3) Data frame handling\nI0523 23:44:50.529772 263 log.go:172] (0xc000516be0) (3) Data frame sent\nI0523 23:44:50.529794 263 log.go:172] (0xc000aeac60) Data frame received for 5\nI0523 23:44:50.529812 263 log.go:172] (0xc000253ea0) (5) Data frame handling\nI0523 23:44:50.529826 263 log.go:172] (0xc000253ea0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.10.191:80/\nI0523 23:44:50.533430 263 log.go:172] (0xc000aeac60) Data frame received for 3\nI0523 23:44:50.533449 263 log.go:172] (0xc000516be0) (3) Data frame handling\nI0523 23:44:50.533463 263 log.go:172] (0xc000516be0) (3) Data frame sent\nI0523 23:44:50.533877 263 log.go:172] (0xc000aeac60) Data frame received for 5\nI0523 23:44:50.533901 263 log.go:172] (0xc000aeac60) Data frame received for 3\nI0523 23:44:50.533954 263 log.go:172] (0xc000516be0) (3) Data frame handling\nI0523 23:44:50.533971 263 log.go:172] (0xc000516be0) (3) Data frame sent\nI0523 23:44:50.533996 263 log.go:172] (0xc000253ea0) (5) Data frame handling\nI0523 23:44:50.534018 263 log.go:172] (0xc000253ea0) (5) Data frame sent\nI0523 23:44:50.534029 263 log.go:172] (0xc000aeac60) Data frame received for 5\nI0523 23:44:50.534039 263 log.go:172] (0xc000253ea0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.10.191:80/\nI0523 23:44:50.534056 263 log.go:172] (0xc000253ea0) (5) Data frame sent\nI0523 23:44:50.537091 263 log.go:172] (0xc000aeac60) Data frame received for 3\nI0523 23:44:50.537251 263 log.go:172] (0xc000516be0) (3) Data frame handling\nI0523 23:44:50.537275 263 log.go:172] (0xc000516be0) (3) Data frame sent\nI0523 23:44:50.537717 263 log.go:172] (0xc000aeac60) Data frame received for 3\nI0523 23:44:50.537742 263 log.go:172] (0xc000516be0) (3) Data frame handling\nI0523 23:44:50.537765 263 log.go:172] (0xc000516be0) (3) Data frame sent\nI0523 23:44:50.537792 263 log.go:172] (0xc000aeac60) Data frame received for 5\nI0523 23:44:50.537809 263 log.go:172] (0xc000253ea0) (5) Data frame handling\nI0523 23:44:50.537836 263 log.go:172] (0xc000253ea0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.10.191:80/\nI0523 23:44:50.544565 263 log.go:172] (0xc000aeac60) Data frame received for 3\nI0523 23:44:50.544585 263 log.go:172] (0xc000516be0) (3) Data frame handling\nI0523 23:44:50.544594 263 log.go:172] (0xc000516be0) (3) Data frame sent\nI0523 23:44:50.545382 263 log.go:172] (0xc000aeac60) Data frame received for 5\nI0523 23:44:50.545413 263 log.go:172] (0xc000253ea0) (5) Data frame handling\nI0523 23:44:50.545424 263 log.go:172] (0xc000253ea0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.10.191:80/\nI0523 23:44:50.545436 263 log.go:172] (0xc000aeac60) Data frame received for 3\nI0523 23:44:50.545442 263 log.go:172] (0xc000516be0) (3) Data frame handling\nI0523 23:44:50.545450 263 log.go:172] (0xc000516be0) (3) Data frame sent\nI0523 23:44:50.550884 263 log.go:172] (0xc000aeac60) Data frame received for 3\nI0523 23:44:50.550912 263 log.go:172] (0xc000516be0) (3) Data frame handling\nI0523 23:44:50.550946 263 log.go:172] (0xc000516be0) (3) Data frame sent\nI0523 23:44:50.551320 263 log.go:172] (0xc000aeac60) Data frame received for 3\nI0523 23:44:50.551413 263 log.go:172] (0xc000516be0) (3) Data frame handling\nI0523 23:44:50.551545 263 log.go:172] (0xc000516be0) (3) Data frame sent\nI0523 23:44:50.551735 
263 log.go:172] (0xc000aeac60) Data frame received for 5\nI0523 23:44:50.551809 263 log.go:172] (0xc000253ea0) (5) Data frame handling\nI0523 23:44:50.551847 263 log.go:172] (0xc000253ea0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.10.191:80/\nI0523 23:44:50.557591 263 log.go:172] (0xc000aeac60) Data frame received for 3\nI0523 23:44:50.557604 263 log.go:172] (0xc000516be0) (3) Data frame handling\nI0523 23:44:50.557617 263 log.go:172] (0xc000516be0) (3) Data frame sent\nI0523 23:44:50.558148 263 log.go:172] (0xc000aeac60) Data frame received for 5\nI0523 23:44:50.558161 263 log.go:172] (0xc000253ea0) (5) Data frame handling\nI0523 23:44:50.558175 263 log.go:172] (0xc000253ea0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.10.191:80/\nI0523 23:44:50.558183 263 log.go:172] (0xc000aeac60) Data frame received for 3\nI0523 23:44:50.558199 263 log.go:172] (0xc000516be0) (3) Data frame handling\nI0523 23:44:50.558213 263 log.go:172] (0xc000516be0) (3) Data frame sent\nI0523 23:44:50.562464 263 log.go:172] (0xc000aeac60) Data frame received for 3\nI0523 23:44:50.562488 263 log.go:172] (0xc000516be0) (3) Data frame handling\nI0523 23:44:50.562503 263 log.go:172] (0xc000516be0) (3) Data frame sent\nI0523 23:44:50.562917 263 log.go:172] (0xc000aeac60) Data frame received for 5\nI0523 23:44:50.562935 263 log.go:172] (0xc000253ea0) (5) Data frame handling\nI0523 23:44:50.562941 263 log.go:172] (0xc000253ea0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.10.191:80/\nI0523 23:44:50.562951 263 log.go:172] (0xc000aeac60) Data frame received for 3\nI0523 23:44:50.562959 263 log.go:172] (0xc000516be0) (3) Data frame handling\nI0523 23:44:50.562967 263 log.go:172] (0xc000516be0) (3) Data frame sent\nI0523 23:44:50.567726 263 log.go:172] (0xc000aeac60) Data frame received for 3\nI0523 23:44:50.567739 263 log.go:172] (0xc000516be0) (3) Data frame handling\nI0523 23:44:50.567832 263 log.go:172] (0xc000516be0) (3) Data frame sent\nI0523 23:44:50.568335 263 log.go:172] (0xc000aeac60) Data frame received for 5\nI0523 23:44:50.568351 263 log.go:172] (0xc000253ea0) (5) Data frame handling\nI0523 23:44:50.568358 263 log.go:172] (0xc000253ea0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.10.191:80/\nI0523 23:44:50.568367 263 log.go:172] (0xc000aeac60) Data frame received for 3\nI0523 23:44:50.568375 263 log.go:172] (0xc000516be0) (3) Data frame handling\nI0523 23:44:50.568389 263 log.go:172] (0xc000516be0) (3) Data frame sent\nI0523 23:44:50.574327 263 log.go:172] (0xc000aeac60) Data frame received for 3\nI0523 23:44:50.574349 263 log.go:172] (0xc000516be0) (3) Data frame handling\nI0523 23:44:50.574368 263 log.go:172] (0xc000516be0) (3) Data frame sent\nI0523 23:44:50.575109 263 log.go:172] (0xc000aeac60) Data frame received for 3\nI0523 23:44:50.575146 263 log.go:172] (0xc000516be0) (3) Data frame handling\nI0523 23:44:50.575276 263 log.go:172] (0xc000aeac60) Data frame received for 5\nI0523 23:44:50.575291 263 log.go:172] (0xc000253ea0) (5) Data frame handling\nI0523 23:44:50.576975 263 log.go:172] (0xc000aeac60) Data frame received for 1\nI0523 23:44:50.576992 263 log.go:172] (0xc0005d0fa0) (1) Data frame handling\nI0523 23:44:50.577008 263 log.go:172] (0xc0005d0fa0) (1) Data frame sent\nI0523 23:44:50.577077 263 log.go:172] (0xc000aeac60) (0xc0005d0fa0) Stream removed, broadcasting: 1\nI0523 23:44:50.577227 263 log.go:172] (0xc000aeac60) Go away received\nI0523 23:44:50.577467 263 log.go:172] 
(0xc000aeac60) (0xc0005d0fa0) Stream removed, broadcasting: 1\nI0523 23:44:50.577488 263 log.go:172] (0xc000aeac60) (0xc000516be0) Stream removed, broadcasting: 3\nI0523 23:44:50.577499 263 log.go:172] (0xc000aeac60) (0xc000253ea0) Stream removed, broadcasting: 5\n" May 23 23:44:50.582: INFO: stdout: "\naffinity-clusterip-p89x4\naffinity-clusterip-p89x4\naffinity-clusterip-p89x4\naffinity-clusterip-p89x4\naffinity-clusterip-p89x4\naffinity-clusterip-p89x4\naffinity-clusterip-p89x4\naffinity-clusterip-p89x4\naffinity-clusterip-p89x4\naffinity-clusterip-p89x4\naffinity-clusterip-p89x4\naffinity-clusterip-p89x4\naffinity-clusterip-p89x4\naffinity-clusterip-p89x4\naffinity-clusterip-p89x4\naffinity-clusterip-p89x4" May 23 23:44:50.582: INFO: Received response from host: May 23 23:44:50.582: INFO: Received response from host: affinity-clusterip-p89x4 May 23 23:44:50.582: INFO: Received response from host: affinity-clusterip-p89x4 May 23 23:44:50.582: INFO: Received response from host: affinity-clusterip-p89x4 May 23 23:44:50.582: INFO: Received response from host: affinity-clusterip-p89x4 May 23 23:44:50.582: INFO: Received response from host: affinity-clusterip-p89x4 May 23 23:44:50.582: INFO: Received response from host: affinity-clusterip-p89x4 May 23 23:44:50.582: INFO: Received response from host: affinity-clusterip-p89x4 May 23 23:44:50.582: INFO: Received response from host: affinity-clusterip-p89x4 May 23 23:44:50.582: INFO: Received response from host: affinity-clusterip-p89x4 May 23 23:44:50.582: INFO: Received response from host: affinity-clusterip-p89x4 May 23 23:44:50.582: INFO: Received response from host: affinity-clusterip-p89x4 May 23 23:44:50.582: INFO: Received response from host: affinity-clusterip-p89x4 May 23 23:44:50.582: INFO: Received response from host: affinity-clusterip-p89x4 May 23 23:44:50.582: INFO: Received response from host: affinity-clusterip-p89x4 May 23 23:44:50.582: INFO: Received response from host: affinity-clusterip-p89x4 May 23 23:44:50.582: INFO: Received response from host: affinity-clusterip-p89x4 May 23 23:44:50.582: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-5930, will wait for the garbage collector to delete the pods May 23 23:44:50.728: INFO: Deleting ReplicationController affinity-clusterip took: 32.120072ms May 23 23:44:51.128: INFO: Terminating ReplicationController affinity-clusterip pods took: 400.290954ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:45:05.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5930" for this suite. 
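All sixteen curls above returning affinity-clusterip-p89x4 is the behavior a ClusterIP Service with ClientIP session affinity guarantees: requests from one client stick to one backend. A sketch of the Service shape involved, with illustrative port numbers rather than the test's exact fixture:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: affinity-clusterip
spec:
  type: ClusterIP
  selector:
    name: affinity-clusterip      # must match the backend pods' labels
  sessionAffinity: ClientIP       # pin each client IP to a single backend
  ports:
  - port: 80
    targetPort: 9376
EOF
# then, from a pod in the cluster, repeat the test's probe:
# for i in $(seq 0 15); do curl -s --connect-timeout 2 http://<cluster-ip>:80/; echo; done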
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:27.064 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":53,"skipped":881,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:45:05.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 23 23:45:05.438: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:45:06.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5192" for this suite. 
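The status sub-resource operations in this test go against the CRD's .../status endpoint. With stock tooling that endpoint can be reached as sketched below (kubectl get --raw is read-only, so the patch goes through kubectl proxy); the CRD name foos.example.com is hypothetical:

# read the status sub-resource of a CRD
kubectl get --raw /apis/apiextensions.k8s.io/v1/customresourcedefinitions/foos.example.com/status
# patch it through a local proxy; metadata stays writable via the sub-resource
kubectl proxy --port=8001 &
curl -X PATCH -H 'Content-Type: application/merge-patch+json' \
  --data '{"metadata":{"labels":{"patched":"true"}}}' \
  http://127.0.0.1:8001/apis/apiextensions.k8s.io/v1/customresourcedefinitions/foos.example.com/status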
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":288,"completed":54,"skipped":887,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:45:06.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 23 23:45:06.220: INFO: Pod name pod-release: Found 0 pods out of 1 May 23 23:45:11.253: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:45:11.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1850" for this suite. 
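The release step above can be observed directly on any ReplicationController: overwrite the matched label and the controller drops its ownerReference on the pod while spinning up a replacement to hold the replica count. A sketch, assuming an RC selecting name=pod-release and substituting a real pod name for <pod-name>:

kubectl get pods -l name=pod-release -o name              # the pod owned by the RC
kubectl label pod <pod-name> name=released --overwrite    # stop matching the selector
kubectl get pod <pod-name> -o jsonpath='{.metadata.ownerReferences}'   # now empty: released
kubectl get pods -l name=pod-release                      # the RC has created a replacement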
• [SLOW TEST:5.319 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":288,"completed":55,"skipped":1044,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:45:11.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-secret-pffg STEP: Creating a pod to test atomic-volume-subpath May 23 23:45:11.558: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-pffg" in namespace "subpath-4167" to be "Succeeded or Failed" May 23 23:45:11.578: INFO: Pod "pod-subpath-test-secret-pffg": Phase="Pending", Reason="", readiness=false. Elapsed: 20.42395ms May 23 23:45:13.643: INFO: Pod "pod-subpath-test-secret-pffg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085004565s May 23 23:45:15.666: INFO: Pod "pod-subpath-test-secret-pffg": Phase="Running", Reason="", readiness=true. Elapsed: 4.108491424s May 23 23:45:17.670: INFO: Pod "pod-subpath-test-secret-pffg": Phase="Running", Reason="", readiness=true. Elapsed: 6.112605933s May 23 23:45:19.674: INFO: Pod "pod-subpath-test-secret-pffg": Phase="Running", Reason="", readiness=true. Elapsed: 8.116722591s May 23 23:45:21.679: INFO: Pod "pod-subpath-test-secret-pffg": Phase="Running", Reason="", readiness=true. Elapsed: 10.12102575s May 23 23:45:23.683: INFO: Pod "pod-subpath-test-secret-pffg": Phase="Running", Reason="", readiness=true. Elapsed: 12.125302429s May 23 23:45:25.687: INFO: Pod "pod-subpath-test-secret-pffg": Phase="Running", Reason="", readiness=true. Elapsed: 14.129320836s May 23 23:45:27.692: INFO: Pod "pod-subpath-test-secret-pffg": Phase="Running", Reason="", readiness=true. Elapsed: 16.134499786s May 23 23:45:29.698: INFO: Pod "pod-subpath-test-secret-pffg": Phase="Running", Reason="", readiness=true. Elapsed: 18.140048312s May 23 23:45:31.703: INFO: Pod "pod-subpath-test-secret-pffg": Phase="Running", Reason="", readiness=true. Elapsed: 20.144894064s May 23 23:45:33.707: INFO: Pod "pod-subpath-test-secret-pffg": Phase="Running", Reason="", readiness=true. Elapsed: 22.149195801s May 23 23:45:35.711: INFO: Pod "pod-subpath-test-secret-pffg": Phase="Running", Reason="", readiness=true. Elapsed: 24.153294065s May 23 23:45:37.715: INFO: Pod "pod-subpath-test-secret-pffg": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.156935914s STEP: Saw pod success May 23 23:45:37.715: INFO: Pod "pod-subpath-test-secret-pffg" satisfied condition "Succeeded or Failed" May 23 23:45:37.717: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-secret-pffg container test-container-subpath-secret-pffg: STEP: delete the pod May 23 23:45:37.756: INFO: Waiting for pod pod-subpath-test-secret-pffg to disappear May 23 23:45:37.760: INFO: Pod pod-subpath-test-secret-pffg no longer exists STEP: Deleting pod pod-subpath-test-secret-pffg May 23 23:45:37.760: INFO: Deleting pod "pod-subpath-test-secret-pffg" in namespace "subpath-4167" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:45:37.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4167" for this suite. • [SLOW TEST:26.381 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":288,"completed":56,"skipped":1085,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:45:37.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 23 23:45:37.996: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-cf4e5d1c-04bd-4a80-88af-52efa18ddbba" in namespace "security-context-test-5361" to be "Succeeded or Failed" May 23 23:45:38.018: INFO: Pod "busybox-privileged-false-cf4e5d1c-04bd-4a80-88af-52efa18ddbba": Phase="Pending", Reason="", readiness=false. Elapsed: 21.94907ms May 23 23:45:40.023: INFO: Pod "busybox-privileged-false-cf4e5d1c-04bd-4a80-88af-52efa18ddbba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026086409s May 23 23:45:42.027: INFO: Pod "busybox-privileged-false-cf4e5d1c-04bd-4a80-88af-52efa18ddbba": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.0303367s May 23 23:45:42.027: INFO: Pod "busybox-privileged-false-cf4e5d1c-04bd-4a80-88af-52efa18ddbba" satisfied condition "Succeeded or Failed" May 23 23:45:42.039: INFO: Got logs for pod "busybox-privileged-false-cf4e5d1c-04bd-4a80-88af-52efa18ddbba": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:45:42.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5361" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":57,"skipped":1094,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:45:42.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:45:47.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9025" for this suite. 
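Adoption runs in the opposite direction: a pre-existing pod that matches a controller's selector is taken over, visible as a new ownerReference. A sketch with hypothetical names; the ReplicationController itself would look like the ReplicaSet manifest sketched earlier, with kind: ReplicationController and a plain-map selector:

kubectl run pod-adoption --image=k8s.gcr.io/pause:3.2 --labels=name=pod-adoption --restart=Never
# ... create an RC whose selector is name=pod-adoption ...
kubectl get pod pod-adoption -o jsonpath='{range .metadata.ownerReferences[*]}{.kind}/{.name}{"\n"}{end}'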
• [SLOW TEST:5.345 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":288,"completed":58,"skipped":1137,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:45:47.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-eafedaf0-934f-4abd-9e5f-d33872532de7 STEP: Creating a pod to test consume configMaps May 23 23:45:47.529: INFO: Waiting up to 5m0s for pod "pod-configmaps-4658107c-6eab-4e39-8073-9acc79b8a3ce" in namespace "configmap-1441" to be "Succeeded or Failed" May 23 23:45:47.535: INFO: Pod "pod-configmaps-4658107c-6eab-4e39-8073-9acc79b8a3ce": Phase="Pending", Reason="", readiness=false. Elapsed: 5.29201ms May 23 23:45:49.612: INFO: Pod "pod-configmaps-4658107c-6eab-4e39-8073-9acc79b8a3ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082647667s May 23 23:45:51.616: INFO: Pod "pod-configmaps-4658107c-6eab-4e39-8073-9acc79b8a3ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.086907129s STEP: Saw pod success May 23 23:45:51.616: INFO: Pod "pod-configmaps-4658107c-6eab-4e39-8073-9acc79b8a3ce" satisfied condition "Succeeded or Failed" May 23 23:45:51.619: INFO: Trying to get logs from node latest-worker pod pod-configmaps-4658107c-6eab-4e39-8073-9acc79b8a3ce container configmap-volume-test: STEP: delete the pod May 23 23:45:51.755: INFO: Waiting for pod pod-configmaps-4658107c-6eab-4e39-8073-9acc79b8a3ce to disappear May 23 23:45:51.767: INFO: Pod pod-configmaps-4658107c-6eab-4e39-8073-9acc79b8a3ce no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:45:51.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1441" for this suite. 
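defaultMode sets the permission bits on every file the configMap volume projects. A minimal sketch, assuming a working kubectl context; all names are illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-mode-demo              # hypothetical name
data:
  data-1: "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: cm-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: app
    image: busybox:1.29
    # -L follows the projected symlink; expect "400"
    command: ["sh", "-c", "stat -Lc '%a' /etc/cm/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: cm-mode-demo
      defaultMode: 0400           # -r-------- on projected files
EOF
kubectl logs cm-mode-demo         # prints 400 once the pod has completed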
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":59,"skipped":1156,"failed":0} SSS ------------------------------ [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:45:51.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath May 23 23:45:55.939: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-6007 PodName:var-expansion-daa9cd0d-eb54-428e-8779-5f211b58daca ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 23 23:45:55.939: INFO: >>> kubeConfig: /root/.kube/config I0523 23:45:55.999969 7 log.go:172] (0xc00291d1e0) (0xc0023401e0) Create stream I0523 23:45:56.000012 7 log.go:172] (0xc00291d1e0) (0xc0023401e0) Stream added, broadcasting: 1 I0523 23:45:56.004103 7 log.go:172] (0xc00291d1e0) Reply frame received for 1 I0523 23:45:56.004141 7 log.go:172] (0xc00291d1e0) (0xc000b86780) Create stream I0523 23:45:56.004168 7 log.go:172] (0xc00291d1e0) (0xc000b86780) Stream added, broadcasting: 3 I0523 23:45:56.004962 7 log.go:172] (0xc00291d1e0) Reply frame received for 3 I0523 23:45:56.004985 7 log.go:172] (0xc00291d1e0) (0xc001baec80) Create stream I0523 23:45:56.004992 7 log.go:172] (0xc00291d1e0) (0xc001baec80) Stream added, broadcasting: 5 I0523 23:45:56.005958 7 log.go:172] (0xc00291d1e0) Reply frame received for 5 I0523 23:45:56.092817 7 log.go:172] (0xc00291d1e0) Data frame received for 3 I0523 23:45:56.092839 7 log.go:172] (0xc000b86780) (3) Data frame handling I0523 23:45:56.093002 7 log.go:172] (0xc00291d1e0) Data frame received for 5 I0523 23:45:56.093027 7 log.go:172] (0xc001baec80) (5) Data frame handling I0523 23:45:56.094494 7 log.go:172] (0xc00291d1e0) Data frame received for 1 I0523 23:45:56.094518 7 log.go:172] (0xc0023401e0) (1) Data frame handling I0523 23:45:56.094543 7 log.go:172] (0xc0023401e0) (1) Data frame sent I0523 23:45:56.094559 7 log.go:172] (0xc00291d1e0) (0xc0023401e0) Stream removed, broadcasting: 1 I0523 23:45:56.094638 7 log.go:172] (0xc00291d1e0) (0xc0023401e0) Stream removed, broadcasting: 1 I0523 23:45:56.094649 7 log.go:172] (0xc00291d1e0) (0xc000b86780) Stream removed, broadcasting: 3 I0523 23:45:56.094717 7 log.go:172] (0xc00291d1e0) Go away received I0523 23:45:56.094764 7 log.go:172] (0xc00291d1e0) (0xc001baec80) Stream removed, broadcasting: 5 STEP: test for file in mounted path May 23 23:45:56.109: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-6007 PodName:var-expansion-daa9cd0d-eb54-428e-8779-5f211b58daca ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 23 23:45:56.109: INFO: >>> 
kubeConfig: /root/.kube/config I0523 23:45:56.139971 7 log.go:172] (0xc002c2d550) (0xc000b87540) Create stream I0523 23:45:56.140014 7 log.go:172] (0xc002c2d550) (0xc000b87540) Stream added, broadcasting: 1 I0523 23:45:56.142018 7 log.go:172] (0xc002c2d550) Reply frame received for 1 I0523 23:45:56.142058 7 log.go:172] (0xc002c2d550) (0xc001baed20) Create stream I0523 23:45:56.142073 7 log.go:172] (0xc002c2d550) (0xc001baed20) Stream added, broadcasting: 3 I0523 23:45:56.142894 7 log.go:172] (0xc002c2d550) Reply frame received for 3 I0523 23:45:56.142913 7 log.go:172] (0xc002c2d550) (0xc000d48dc0) Create stream I0523 23:45:56.142923 7 log.go:172] (0xc002c2d550) (0xc000d48dc0) Stream added, broadcasting: 5 I0523 23:45:56.143768 7 log.go:172] (0xc002c2d550) Reply frame received for 5 I0523 23:45:56.208563 7 log.go:172] (0xc002c2d550) Data frame received for 5 I0523 23:45:56.208612 7 log.go:172] (0xc000d48dc0) (5) Data frame handling I0523 23:45:56.208635 7 log.go:172] (0xc002c2d550) Data frame received for 3 I0523 23:45:56.208646 7 log.go:172] (0xc001baed20) (3) Data frame handling I0523 23:45:56.210324 7 log.go:172] (0xc002c2d550) Data frame received for 1 I0523 23:45:56.210373 7 log.go:172] (0xc000b87540) (1) Data frame handling I0523 23:45:56.210413 7 log.go:172] (0xc000b87540) (1) Data frame sent I0523 23:45:56.210454 7 log.go:172] (0xc002c2d550) (0xc000b87540) Stream removed, broadcasting: 1 I0523 23:45:56.210545 7 log.go:172] (0xc002c2d550) (0xc000b87540) Stream removed, broadcasting: 1 I0523 23:45:56.210559 7 log.go:172] (0xc002c2d550) (0xc001baed20) Stream removed, broadcasting: 3 I0523 23:45:56.210706 7 log.go:172] (0xc002c2d550) (0xc000d48dc0) Stream removed, broadcasting: 5 STEP: updating the annotation value I0523 23:45:56.210847 7 log.go:172] (0xc002c2d550) Go away received May 23 23:45:56.719: INFO: Successfully updated pod "var-expansion-daa9cd0d-eb54-428e-8779-5f211b58daca" STEP: waiting for annotated pod running STEP: deleting the pod gracefully May 23 23:45:56.725: INFO: Deleting pod "var-expansion-daa9cd0d-eb54-428e-8779-5f211b58daca" in namespace "var-expansion-6007" May 23 23:45:56.729: INFO: Wait up to 5m0s for pod "var-expansion-daa9cd0d-eb54-428e-8779-5f211b58daca" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:46:32.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6007" for this suite. 
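------------------------------
The subpath write above works because the pod mounts one volume twice: whole at /volume_mount, and again at /subpath_mount through a subPathExpr that the kubelet expands from an environment variable fed by the pod's own annotation (downward API). A minimal sketch of that layout, assuming an emptyDir volume and illustrative names; the test's actual pod spec is not printed in the log:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "var-expansion-demo",
			Annotations: map[string]string{"mysubpath": "mypath/foo"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 3600"},
				Env: []corev1.EnvVar{{
					// Downward API: the annotation value becomes an env var...
					Name: "POD_SUBPATH",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{
							FieldPath: "metadata.annotations['mysubpath']",
						},
					},
				}},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "workdir", MountPath: "/volume_mount"},
					// ...and subPathExpr expands it, so /subpath_mount is the
					// mypath/foo subdirectory of the same volume.
					{Name: "workdir", MountPath: "/subpath_mount", SubPathExpr: "$(POD_SUBPATH)"},
				},
			}},
			Volumes: []corev1.Volume{{
				Name:         "workdir",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
	// Print the spec; touching /volume_mount/mypath/foo/test.log in such a pod
	// makes the file visible at /subpath_mount/test.log, as checked above.
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------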
• [SLOW TEST:40.983 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":288,"completed":60,"skipped":1159,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 23:46:32.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test override all
May 23 23:46:32.813: INFO: Waiting up to 5m0s for pod "client-containers-291112da-7635-42d4-8690-339aafdf2dd8" in namespace "containers-1103" to be "Succeeded or Failed"
May 23 23:46:32.818: INFO: Pod "client-containers-291112da-7635-42d4-8690-339aafdf2dd8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.401674ms
May 23 23:46:34.822: INFO: Pod "client-containers-291112da-7635-42d4-8690-339aafdf2dd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008680655s
May 23 23:46:36.826: INFO: Pod "client-containers-291112da-7635-42d4-8690-339aafdf2dd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013148006s
STEP: Saw pod success
May 23 23:46:36.826: INFO: Pod "client-containers-291112da-7635-42d4-8690-339aafdf2dd8" satisfied condition "Succeeded or Failed"
May 23 23:46:36.830: INFO: Trying to get logs from node latest-worker pod client-containers-291112da-7635-42d4-8690-339aafdf2dd8 container test-container:
STEP: delete the pod
May 23 23:46:36.902: INFO: Waiting for pod client-containers-291112da-7635-42d4-8690-339aafdf2dd8 to disappear
May 23 23:46:36.912: INFO: Pod client-containers-291112da-7635-42d4-8690-339aafdf2dd8 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 23:46:36.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1103" for this suite.
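------------------------------
The "override all" case above sets both command and args on the container: command replaces the image's ENTRYPOINT and args replaces its CMD. A minimal sketch with illustrative names and values (the concrete command the test injects is not shown in the log):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"/bin/echo"},             // overrides the image's ENTRYPOINT
				Args:    []string{"override", "arguments"}, // overrides the image's CMD
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out)) // such a pod's log would read: override arguments
}
------------------------------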
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":288,"completed":61,"skipped":1170,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:46:36.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-879 STEP: creating service affinity-nodeport-transition in namespace services-879 STEP: creating replication controller affinity-nodeport-transition in namespace services-879 I0523 23:46:37.170281 7 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-879, replica count: 3 I0523 23:46:40.220661 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0523 23:46:43.220876 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 23 23:46:43.229: INFO: Creating new exec pod May 23 23:46:48.300: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-879 execpod-affinity92gcs -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' May 23 23:46:48.524: INFO: stderr: "I0523 23:46:48.444705 283 log.go:172] (0xc000a6ac60) (0xc00084c8c0) Create stream\nI0523 23:46:48.444761 283 log.go:172] (0xc000a6ac60) (0xc00084c8c0) Stream added, broadcasting: 1\nI0523 23:46:48.447174 283 log.go:172] (0xc000a6ac60) Reply frame received for 1\nI0523 23:46:48.447212 283 log.go:172] (0xc000a6ac60) (0xc000854460) Create stream\nI0523 23:46:48.447229 283 log.go:172] (0xc000a6ac60) (0xc000854460) Stream added, broadcasting: 3\nI0523 23:46:48.448114 283 log.go:172] (0xc000a6ac60) Reply frame received for 3\nI0523 23:46:48.448165 283 log.go:172] (0xc000a6ac60) (0xc000409ea0) Create stream\nI0523 23:46:48.448184 283 log.go:172] (0xc000a6ac60) (0xc000409ea0) Stream added, broadcasting: 5\nI0523 23:46:48.449081 283 log.go:172] (0xc000a6ac60) Reply frame received for 5\nI0523 23:46:48.517893 283 log.go:172] (0xc000a6ac60) Data frame received for 5\nI0523 23:46:48.517935 283 log.go:172] (0xc000409ea0) (5) Data frame handling\nI0523 23:46:48.517964 283 log.go:172] (0xc000409ea0) (5) Data frame sent\nI0523 23:46:48.517990 283 log.go:172] (0xc000a6ac60) Data frame received for 5\nI0523 23:46:48.518000 283 log.go:172] (0xc000409ea0) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI0523 23:46:48.518024 283 
log.go:172] (0xc000409ea0) (5) Data frame sent\nI0523 23:46:48.518038 283 log.go:172] (0xc000a6ac60) Data frame received for 5\nI0523 23:46:48.518064 283 log.go:172] (0xc000409ea0) (5) Data frame handling\nI0523 23:46:48.518203 283 log.go:172] (0xc000a6ac60) Data frame received for 3\nI0523 23:46:48.518222 283 log.go:172] (0xc000854460) (3) Data frame handling\nI0523 23:46:48.520056 283 log.go:172] (0xc000a6ac60) Data frame received for 1\nI0523 23:46:48.520145 283 log.go:172] (0xc00084c8c0) (1) Data frame handling\nI0523 23:46:48.520186 283 log.go:172] (0xc00084c8c0) (1) Data frame sent\nI0523 23:46:48.520221 283 log.go:172] (0xc000a6ac60) (0xc00084c8c0) Stream removed, broadcasting: 1\nI0523 23:46:48.520244 283 log.go:172] (0xc000a6ac60) Go away received\nI0523 23:46:48.520616 283 log.go:172] (0xc000a6ac60) (0xc00084c8c0) Stream removed, broadcasting: 1\nI0523 23:46:48.520630 283 log.go:172] (0xc000a6ac60) (0xc000854460) Stream removed, broadcasting: 3\nI0523 23:46:48.520636 283 log.go:172] (0xc000a6ac60) (0xc000409ea0) Stream removed, broadcasting: 5\n" May 23 23:46:48.524: INFO: stdout: "" May 23 23:46:48.525: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-879 execpod-affinity92gcs -- /bin/sh -x -c nc -zv -t -w 2 10.108.248.82 80' May 23 23:46:48.753: INFO: stderr: "I0523 23:46:48.677980 304 log.go:172] (0xc000956000) (0xc000582140) Create stream\nI0523 23:46:48.678058 304 log.go:172] (0xc000956000) (0xc000582140) Stream added, broadcasting: 1\nI0523 23:46:48.681302 304 log.go:172] (0xc000956000) Reply frame received for 1\nI0523 23:46:48.681385 304 log.go:172] (0xc000956000) (0xc00064be00) Create stream\nI0523 23:46:48.681407 304 log.go:172] (0xc000956000) (0xc00064be00) Stream added, broadcasting: 3\nI0523 23:46:48.682670 304 log.go:172] (0xc000956000) Reply frame received for 3\nI0523 23:46:48.682701 304 log.go:172] (0xc000956000) (0xc00022ec80) Create stream\nI0523 23:46:48.682716 304 log.go:172] (0xc000956000) (0xc00022ec80) Stream added, broadcasting: 5\nI0523 23:46:48.683925 304 log.go:172] (0xc000956000) Reply frame received for 5\nI0523 23:46:48.746184 304 log.go:172] (0xc000956000) Data frame received for 5\nI0523 23:46:48.746234 304 log.go:172] (0xc00022ec80) (5) Data frame handling\nI0523 23:46:48.746255 304 log.go:172] (0xc00022ec80) (5) Data frame sent\nI0523 23:46:48.746267 304 log.go:172] (0xc000956000) Data frame received for 5\nI0523 23:46:48.746278 304 log.go:172] (0xc00022ec80) (5) Data frame handling\n+ nc -zv -t -w 2 10.108.248.82 80\nConnection to 10.108.248.82 80 port [tcp/http] succeeded!\nI0523 23:46:48.746304 304 log.go:172] (0xc000956000) Data frame received for 3\nI0523 23:46:48.746318 304 log.go:172] (0xc00064be00) (3) Data frame handling\nI0523 23:46:48.747919 304 log.go:172] (0xc000956000) Data frame received for 1\nI0523 23:46:48.747972 304 log.go:172] (0xc000582140) (1) Data frame handling\nI0523 23:46:48.747994 304 log.go:172] (0xc000582140) (1) Data frame sent\nI0523 23:46:48.748029 304 log.go:172] (0xc000956000) (0xc000582140) Stream removed, broadcasting: 1\nI0523 23:46:48.748078 304 log.go:172] (0xc000956000) Go away received\nI0523 23:46:48.748636 304 log.go:172] (0xc000956000) (0xc000582140) Stream removed, broadcasting: 1\nI0523 23:46:48.748660 304 log.go:172] (0xc000956000) (0xc00064be00) Stream removed, broadcasting: 3\nI0523 23:46:48.748672 304 log.go:172] (0xc000956000) (0xc00022ec80) Stream removed, broadcasting: 5\n" May 23 23:46:48.754: INFO: stdout: "" 
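------------------------------
What this test verifies further below is a toggle of Service.spec.sessionAffinity: with affinity off, requests from one client spread across backends (the mixed hostnames in the first curl loop), and with ClientIP affinity they stick to a single backend (the repeated hostname in the second loop). A minimal client-go sketch of that toggle, assuming illustrative names and the default namespace rather than the test's generated ones:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	svcs := client.CoreV1().Services("default")

	// NodePort service starting without session affinity.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-demo"},
		Spec: corev1.ServiceSpec{
			Type:            corev1.ServiceTypeNodePort,
			Selector:        map[string]string{"app": "affinity-demo"},
			SessionAffinity: corev1.ServiceAffinityNone,
			Ports:           []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(8080)}},
		},
	}
	created, err := svcs.Create(context.TODO(), svc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// Flip affinity on; kube-proxy then pins each client IP to one endpoint.
	created.Spec.SessionAffinity = corev1.ServiceAffinityClientIP
	if _, err := svcs.Update(context.TODO(), created, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------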
May 23 23:46:48.754: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-879 execpod-affinity92gcs -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31020' May 23 23:46:48.984: INFO: stderr: "I0523 23:46:48.901781 325 log.go:172] (0xc000a02e70) (0xc000b5a500) Create stream\nI0523 23:46:48.901841 325 log.go:172] (0xc000a02e70) (0xc000b5a500) Stream added, broadcasting: 1\nI0523 23:46:48.906961 325 log.go:172] (0xc000a02e70) Reply frame received for 1\nI0523 23:46:48.907012 325 log.go:172] (0xc000a02e70) (0xc0006990e0) Create stream\nI0523 23:46:48.907023 325 log.go:172] (0xc000a02e70) (0xc0006990e0) Stream added, broadcasting: 3\nI0523 23:46:48.908117 325 log.go:172] (0xc000a02e70) Reply frame received for 3\nI0523 23:46:48.908171 325 log.go:172] (0xc000a02e70) (0xc0005f6e60) Create stream\nI0523 23:46:48.908194 325 log.go:172] (0xc000a02e70) (0xc0005f6e60) Stream added, broadcasting: 5\nI0523 23:46:48.909408 325 log.go:172] (0xc000a02e70) Reply frame received for 5\nI0523 23:46:48.976522 325 log.go:172] (0xc000a02e70) Data frame received for 3\nI0523 23:46:48.976572 325 log.go:172] (0xc0006990e0) (3) Data frame handling\nI0523 23:46:48.976597 325 log.go:172] (0xc000a02e70) Data frame received for 5\nI0523 23:46:48.976605 325 log.go:172] (0xc0005f6e60) (5) Data frame handling\nI0523 23:46:48.976616 325 log.go:172] (0xc0005f6e60) (5) Data frame sent\nI0523 23:46:48.976632 325 log.go:172] (0xc000a02e70) Data frame received for 5\nI0523 23:46:48.976639 325 log.go:172] (0xc0005f6e60) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 31020\nConnection to 172.17.0.13 31020 port [tcp/31020] succeeded!\nI0523 23:46:48.979083 325 log.go:172] (0xc000a02e70) Data frame received for 1\nI0523 23:46:48.979112 325 log.go:172] (0xc000b5a500) (1) Data frame handling\nI0523 23:46:48.979132 325 log.go:172] (0xc000b5a500) (1) Data frame sent\nI0523 23:46:48.979154 325 log.go:172] (0xc000a02e70) (0xc000b5a500) Stream removed, broadcasting: 1\nI0523 23:46:48.979203 325 log.go:172] (0xc000a02e70) Go away received\nI0523 23:46:48.979563 325 log.go:172] (0xc000a02e70) (0xc000b5a500) Stream removed, broadcasting: 1\nI0523 23:46:48.979590 325 log.go:172] (0xc000a02e70) (0xc0006990e0) Stream removed, broadcasting: 3\nI0523 23:46:48.979601 325 log.go:172] (0xc000a02e70) (0xc0005f6e60) Stream removed, broadcasting: 5\n" May 23 23:46:48.984: INFO: stdout: "" May 23 23:46:48.984: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-879 execpod-affinity92gcs -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31020' May 23 23:46:49.225: INFO: stderr: "I0523 23:46:49.126170 345 log.go:172] (0xc00003b130) (0xc0006acfa0) Create stream\nI0523 23:46:49.126222 345 log.go:172] (0xc00003b130) (0xc0006acfa0) Stream added, broadcasting: 1\nI0523 23:46:49.128709 345 log.go:172] (0xc00003b130) Reply frame received for 1\nI0523 23:46:49.128769 345 log.go:172] (0xc00003b130) (0xc0006bc640) Create stream\nI0523 23:46:49.128788 345 log.go:172] (0xc00003b130) (0xc0006bc640) Stream added, broadcasting: 3\nI0523 23:46:49.130094 345 log.go:172] (0xc00003b130) Reply frame received for 3\nI0523 23:46:49.130136 345 log.go:172] (0xc00003b130) (0xc0006ad540) Create stream\nI0523 23:46:49.130150 345 log.go:172] (0xc00003b130) (0xc0006ad540) Stream added, broadcasting: 5\nI0523 23:46:49.131328 345 log.go:172] (0xc00003b130) Reply frame received for 5\nI0523 23:46:49.216789 345 log.go:172] (0xc00003b130) 
Data frame received for 5\nI0523 23:46:49.216838 345 log.go:172] (0xc0006ad540) (5) Data frame handling\nI0523 23:46:49.216866 345 log.go:172] (0xc0006ad540) (5) Data frame sent\nI0523 23:46:49.216884 345 log.go:172] (0xc00003b130) Data frame received for 5\n+ nc -zv -t -w 2 172.17.0.12 31020\nConnection to 172.17.0.12 31020 port [tcp/31020] succeeded!\nI0523 23:46:49.216915 345 log.go:172] (0xc0006ad540) (5) Data frame handling\nI0523 23:46:49.217506 345 log.go:172] (0xc00003b130) Data frame received for 3\nI0523 23:46:49.217540 345 log.go:172] (0xc0006bc640) (3) Data frame handling\nI0523 23:46:49.219163 345 log.go:172] (0xc00003b130) Data frame received for 1\nI0523 23:46:49.219179 345 log.go:172] (0xc0006acfa0) (1) Data frame handling\nI0523 23:46:49.219187 345 log.go:172] (0xc0006acfa0) (1) Data frame sent\nI0523 23:46:49.219197 345 log.go:172] (0xc00003b130) (0xc0006acfa0) Stream removed, broadcasting: 1\nI0523 23:46:49.219204 345 log.go:172] (0xc00003b130) Go away received\nI0523 23:46:49.219755 345 log.go:172] (0xc00003b130) (0xc0006acfa0) Stream removed, broadcasting: 1\nI0523 23:46:49.219790 345 log.go:172] (0xc00003b130) (0xc0006bc640) Stream removed, broadcasting: 3\nI0523 23:46:49.219807 345 log.go:172] (0xc00003b130) (0xc0006ad540) Stream removed, broadcasting: 5\n" May 23 23:46:49.225: INFO: stdout: "" May 23 23:46:49.232: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-879 execpod-affinity92gcs -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:31020/ ; done' May 23 23:46:49.604: INFO: stderr: "I0523 23:46:49.413714 365 log.go:172] (0xc000b55340) (0xc00060d360) Create stream\nI0523 23:46:49.413774 365 log.go:172] (0xc000b55340) (0xc00060d360) Stream added, broadcasting: 1\nI0523 23:46:49.416618 365 log.go:172] (0xc000b55340) Reply frame received for 1\nI0523 23:46:49.416657 365 log.go:172] (0xc000b55340) (0xc00060d9a0) Create stream\nI0523 23:46:49.416668 365 log.go:172] (0xc000b55340) (0xc00060d9a0) Stream added, broadcasting: 3\nI0523 23:46:49.418152 365 log.go:172] (0xc000b55340) Reply frame received for 3\nI0523 23:46:49.418188 365 log.go:172] (0xc000b55340) (0xc00034d400) Create stream\nI0523 23:46:49.418224 365 log.go:172] (0xc000b55340) (0xc00034d400) Stream added, broadcasting: 5\nI0523 23:46:49.419331 365 log.go:172] (0xc000b55340) Reply frame received for 5\nI0523 23:46:49.482809 365 log.go:172] (0xc000b55340) Data frame received for 3\nI0523 23:46:49.482852 365 log.go:172] (0xc00060d9a0) (3) Data frame handling\nI0523 23:46:49.482866 365 log.go:172] (0xc00060d9a0) (3) Data frame sent\nI0523 23:46:49.482887 365 log.go:172] (0xc000b55340) Data frame received for 5\nI0523 23:46:49.482895 365 log.go:172] (0xc00034d400) (5) Data frame handling\nI0523 23:46:49.482906 365 log.go:172] (0xc00034d400) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31020/\nI0523 23:46:49.487020 365 log.go:172] (0xc000b55340) Data frame received for 3\nI0523 23:46:49.487047 365 log.go:172] (0xc00060d9a0) (3) Data frame handling\nI0523 23:46:49.487068 365 log.go:172] (0xc00060d9a0) (3) Data frame sent\nI0523 23:46:49.487558 365 log.go:172] (0xc000b55340) Data frame received for 3\nI0523 23:46:49.487588 365 log.go:172] (0xc00060d9a0) (3) Data frame handling\nI0523 23:46:49.487601 365 log.go:172] (0xc00060d9a0) (3) Data frame sent\nI0523 23:46:49.487620 365 log.go:172] (0xc000b55340) Data frame received for 5\nI0523 
23:46:49.487631 365 log.go:172] (0xc00034d400) (5) Data frame handling\nI0523 23:46:49.487648 365 log.go:172] (0xc00034d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31020/\nI0523 23:46:49.510099 365 log.go:172] (0xc000b55340) Data frame received for 3\nI0523 23:46:49.510131 365 log.go:172] (0xc00060d9a0) (3) Data frame handling\nI0523 23:46:49.510158 365 log.go:172] (0xc00060d9a0) (3) Data frame sent\nI0523 23:46:49.510724 365 log.go:172] (0xc000b55340) Data frame received for 3\nI0523 23:46:49.510768 365 log.go:172] (0xc00060d9a0) (3) Data frame handling\nI0523 23:46:49.510789 365 log.go:172] (0xc00060d9a0) (3) Data frame sent\nI0523 23:46:49.510814 365 log.go:172] (0xc000b55340) Data frame received for 5\nI0523 23:46:49.510833 365 log.go:172] (0xc00034d400) (5) Data frame handling\nI0523 23:46:49.510876 365 log.go:172] (0xc00034d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31020/\nI0523 23:46:49.517821 365 log.go:172] (0xc000b55340) Data frame received for 3\nI0523 23:46:49.517854 365 log.go:172] (0xc00060d9a0) (3) Data frame handling\nI0523 23:46:49.517887 365 log.go:172] (0xc00060d9a0) (3) Data frame sent\nI0523 23:46:49.518055 365 log.go:172] (0xc000b55340) Data frame received for 3\nI0523 23:46:49.518082 365 log.go:172] (0xc00060d9a0) (3) Data frame handling\nI0523 23:46:49.518114 365 log.go:172] (0xc00060d9a0) (3) Data frame sent\nI0523 23:46:49.518187 365 log.go:172] (0xc000b55340) Data frame received for 5\nI0523 23:46:49.518216 365 log.go:172] (0xc00034d400) (5) Data frame handling\nI0523 23:46:49.518246 365 log.go:172] (0xc00034d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31020/\nI0523 23:46:49.526822 365 log.go:172] (0xc000b55340) Data frame received for 3\nI0523 23:46:49.526850 365 log.go:172] (0xc00060d9a0) (3) Data frame handling\nI0523 23:46:49.526869 365 log.go:172] (0xc00060d9a0) (3) Data frame sent\nI0523 23:46:49.527575 365 log.go:172] (0xc000b55340) Data frame received for 5\nI0523 23:46:49.527596 365 log.go:172] (0xc00034d400) (5) Data frame handling\nI0523 23:46:49.527608 365 log.go:172] (0xc000b55340) Data frame received for 3\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31020/\nI0523 23:46:49.527634 365 log.go:172] (0xc00060d9a0) (3) Data frame handling\nI0523 23:46:49.527651 365 log.go:172] (0xc00060d9a0) (3) Data frame sent\nI0523 23:46:49.527667 365 log.go:172] (0xc00034d400) (5) Data frame sent\nI0523 23:46:49.535395 365 log.go:172] (0xc000b55340) Data frame received for 3\nI0523 23:46:49.535419 365 log.go:172] (0xc00060d9a0) (3) Data frame handling\nI0523 23:46:49.535440 365 log.go:172] (0xc00060d9a0) (3) Data frame sent\nI0523 23:46:49.536220 365 log.go:172] (0xc000b55340) Data frame received for 5\nI0523 23:46:49.536236 365 log.go:172] (0xc00034d400) (5) Data frame handling\nI0523 23:46:49.536256 365 log.go:172] (0xc00034d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31020/\nI0523 23:46:49.536382 365 log.go:172] (0xc000b55340) Data frame received for 3\nI0523 23:46:49.536400 365 log.go:172] (0xc00060d9a0) (3) Data frame handling\nI0523 23:46:49.536411 365 log.go:172] (0xc00060d9a0) (3) Data frame sent\nI0523 23:46:49.542268 365 log.go:172] (0xc000b55340) Data frame received for 3\nI0523 23:46:49.542291 365 log.go:172] (0xc00060d9a0) (3) Data frame handling\nI0523 23:46:49.542305 365 log.go:172] (0xc00060d9a0) (3) Data frame sent\nI0523 23:46:49.542649 365 log.go:172] (0xc000b55340) 
Data frame received for 3\nI0523 23:46:49.542664 365 log.go:172] (0xc00060d9a0) (3) Data frame handling\nI0523 23:46:49.542676 365 log.go:172] (0xc00060d9a0) (3) Data frame sent\nI0523 23:46:49.542728 365 log.go:172] (0xc000b55340) Data frame received for 5\nI0523 23:46:49.542749 365 log.go:172] (0xc00034d400) (5) Data frame handling\nI0523 23:46:49.542766 365 log.go:172] (0xc00034d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31020/\nI0523 23:46:49.548726 365 log.go:172] (0xc000b55340) Data frame received for 3\nI0523 23:46:49.548768 365 log.go:172] (0xc00060d9a0) (3) Data frame handling\nI0523 23:46:49.548797 365 log.go:172] (0xc00060d9a0) (3) Data frame sent\nI0523 23:46:49.549316 365 log.go:172] (0xc000b55340) Data frame received for 5\nI0523 23:46:49.549339 365 log.go:172] (0xc00034d400) (5) Data frame handling\nI0523 23:46:49.549355 365 log.go:172] (0xc00034d400) (5) Data frame sent\n+ echo\nI0523 23:46:49.549514 365 log.go:172] (0xc000b55340) Data frame received for 5\nI0523 23:46:49.549686 365 log.go:172] (0xc00034d400) (5) Data frame handling\nI0523 23:46:49.549777 365 log.go:172] (0xc00034d400) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31020/\nI0523 23:46:49.549810 365 log.go:172] (0xc000b55340) Data frame received for 3\nI0523 23:46:49.549835 365 log.go:172] (0xc00060d9a0) (3) Data frame handling\nI0523 23:46:49.549855 365 log.go:172] (0xc00060d9a0) (3) Data frame sent\nI0523 23:46:49.554437 365 log.go:172] (0xc000b55340) Data frame received for 3\nI0523 23:46:49.554519 365 log.go:172] (0xc00060d9a0) (3) Data frame handling\nI0523 23:46:49.554544 365 log.go:172] (0xc00060d9a0) (3) Data frame sent\nI0523 23:46:49.554720 365 log.go:172] (0xc000b55340) Data frame received for 5\nI0523 23:46:49.554742 365 log.go:172] (0xc00034d400) (5) Data frame handling\n+ echo\n+ curl -q -sI0523 23:46:49.554761 365 log.go:172] (0xc000b55340) Data frame received for 3\nI0523 23:46:49.554788 365 log.go:172] (0xc00060d9a0) (3) Data frame handling\nI0523 23:46:49.554809 365 log.go:172] (0xc00060d9a0) (3) Data frame sent\nI0523 23:46:49.554824 365 log.go:172] (0xc00034d400) (5) Data frame sent\nI0523 23:46:49.554833 365 log.go:172] (0xc000b55340) Data frame received for 5\nI0523 23:46:49.554838 365 log.go:172] (0xc00034d400) (5) Data frame handling\nI0523 23:46:49.554844 365 log.go:172] (0xc00034d400) (5) Data frame sent\n --connect-timeout 2 http://172.17.0.13:31020/\nI0523 23:46:49.560278 365 log.go:172] (0xc000b55340) Data frame received for 3\nI0523 23:46:49.560297 365 log.go:172] (0xc00060d9a0) (3) Data frame handling\nI0523 23:46:49.560311 365 log.go:172] (0xc00060d9a0) (3) Data frame sent\nI0523 23:46:49.560714 365 log.go:172] (0xc000b55340) Data frame received for 3\nI0523 23:46:49.560741 365 log.go:172] (0xc00060d9a0) (3) Data frame handling\nI0523 23:46:49.560750 365 log.go:172] (0xc00060d9a0) (3) Data frame sent\nI0523 23:46:49.560759 365 log.go:172] (0xc000b55340) Data frame received for 5\nI0523 23:46:49.560765 365 log.go:172] (0xc00034d400) (5) Data frame handling\nI0523 23:46:49.560777 365 log.go:172] (0xc00034d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31020/\nI0523 23:46:49.565493 365 log.go:172] (0xc000b55340) Data frame received for 3\nI0523 23:46:49.565516 365 log.go:172] (0xc00060d9a0) (3) Data frame handling\nI0523 23:46:49.565529 365 log.go:172] (0xc00060d9a0) (3) Data frame sent\nI0523 23:46:49.566203 365 log.go:172] (0xc000b55340) Data frame received for 3\nI0523 
23:46:49.566222 365 log.go:172] (0xc00060d9a0) (3) Data frame handling\nI0523 23:46:49.566235 365 log.go:172] (0xc00060d9a0) (3) Data frame sent\nI0523 23:46:49.566257 365 log.go:172] (0xc000b55340) Data frame received for 5\nI0523 23:46:49.566286 365 log.go:172] (0xc00034d400) (5) Data frame handling\nI0523 23:46:49.566307 365 log.go:172] (0xc00034d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31020/\nI0523 23:46:49.569981 365 log.go:172] (0xc000b55340) Data frame received for 3\nI0523 23:46:49.569999 365 log.go:172] (0xc00060d9a0) (3) Data frame handling\nI0523 23:46:49.570015 365 log.go:172] (0xc00060d9a0) (3) Data frame sent\nI0523 23:46:49.570465 365 log.go:172] (0xc000b55340) Data frame received for 5\nI0523 23:46:49.570490 365 log.go:172] (0xc00034d400) (5) Data frame handling\nI0523 23:46:49.570500 365 log.go:172] (0xc00034d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31020/\nI0523 23:46:49.570521 365 log.go:172] (0xc000b55340) Data frame received for 3\nI0523 23:46:49.570529 365 log.go:172] (0xc00060d9a0) (3) Data frame handling\nI0523 23:46:49.570538 365 log.go:172] (0xc00060d9a0) (3) Data frame sent\nI0523 23:46:49.575074 365 log.go:172] (0xc000b55340) Data frame received for 3\nI0523 23:46:49.575094 365 log.go:172] (0xc00060d9a0) (3) Data frame handling\nI0523 23:46:49.575106 365 log.go:172] (0xc00060d9a0) (3) Data frame sent\nI0523 23:46:49.575534 365 log.go:172] (0xc000b55340) Data frame received for 3\nI0523 23:46:49.575556 365 log.go:172] (0xc00060d9a0) (3) Data frame handling\nI0523 23:46:49.575564 365 log.go:172] (0xc00060d9a0) (3) Data frame sent\nI0523 23:46:49.575578 365 log.go:172] (0xc000b55340) Data frame received for 5\nI0523 23:46:49.575591 365 log.go:172] (0xc00034d400) (5) Data frame handling\nI0523 23:46:49.575598 365 log.go:172] (0xc00034d400) (5) Data frame sent\nI0523 23:46:49.575604 365 log.go:172] (0xc000b55340) Data frame received for 5\nI0523 23:46:49.575610 365 log.go:172] (0xc00034d400) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31020/\nI0523 23:46:49.575634 365 log.go:172] (0xc00034d400) (5) Data frame sent\nI0523 23:46:49.581074 365 log.go:172] (0xc000b55340) Data frame received for 3\nI0523 23:46:49.581086 365 log.go:172] (0xc00060d9a0) (3) Data frame handling\nI0523 23:46:49.581093 365 log.go:172] (0xc00060d9a0) (3) Data frame sent\nI0523 23:46:49.581984 365 log.go:172] (0xc000b55340) Data frame received for 3\nI0523 23:46:49.582009 365 log.go:172] (0xc00060d9a0) (3) Data frame handling\nI0523 23:46:49.582017 365 log.go:172] (0xc00060d9a0) (3) Data frame sent\nI0523 23:46:49.582055 365 log.go:172] (0xc000b55340) Data frame received for 5\nI0523 23:46:49.582088 365 log.go:172] (0xc00034d400) (5) Data frame handling\nI0523 23:46:49.582116 365 log.go:172] (0xc00034d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31020/\nI0523 23:46:49.585822 365 log.go:172] (0xc000b55340) Data frame received for 3\nI0523 23:46:49.585836 365 log.go:172] (0xc00060d9a0) (3) Data frame handling\nI0523 23:46:49.585841 365 log.go:172] (0xc00060d9a0) (3) Data frame sent\nI0523 23:46:49.586292 365 log.go:172] (0xc000b55340) Data frame received for 3\nI0523 23:46:49.586303 365 log.go:172] (0xc00060d9a0) (3) Data frame handling\nI0523 23:46:49.586310 365 log.go:172] (0xc00060d9a0) (3) Data frame sent\nI0523 23:46:49.586338 365 log.go:172] (0xc000b55340) Data frame received for 5\nI0523 23:46:49.586354 365 log.go:172] 
(0xc00034d400) (5) Data frame handling\nI0523 23:46:49.586397 365 log.go:172] (0xc00034d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31020/\nI0523 23:46:49.591352 365 log.go:172] (0xc000b55340) Data frame received for 3\nI0523 23:46:49.591372 365 log.go:172] (0xc00060d9a0) (3) Data frame handling\nI0523 23:46:49.591394 365 log.go:172] (0xc00060d9a0) (3) Data frame sent\nI0523 23:46:49.591760 365 log.go:172] (0xc000b55340) Data frame received for 5\nI0523 23:46:49.591775 365 log.go:172] (0xc00034d400) (5) Data frame handling\nI0523 23:46:49.591783 365 log.go:172] (0xc00034d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31020/\nI0523 23:46:49.591794 365 log.go:172] (0xc000b55340) Data frame received for 3\nI0523 23:46:49.591802 365 log.go:172] (0xc00060d9a0) (3) Data frame handling\nI0523 23:46:49.591824 365 log.go:172] (0xc00060d9a0) (3) Data frame sent\nI0523 23:46:49.596840 365 log.go:172] (0xc000b55340) Data frame received for 3\nI0523 23:46:49.596855 365 log.go:172] (0xc00060d9a0) (3) Data frame handling\nI0523 23:46:49.596872 365 log.go:172] (0xc00060d9a0) (3) Data frame sent\nI0523 23:46:49.597681 365 log.go:172] (0xc000b55340) Data frame received for 5\nI0523 23:46:49.597704 365 log.go:172] (0xc00034d400) (5) Data frame handling\nI0523 23:46:49.597733 365 log.go:172] (0xc000b55340) Data frame received for 3\nI0523 23:46:49.597747 365 log.go:172] (0xc00060d9a0) (3) Data frame handling\nI0523 23:46:49.599517 365 log.go:172] (0xc000b55340) Data frame received for 1\nI0523 23:46:49.599538 365 log.go:172] (0xc00060d360) (1) Data frame handling\nI0523 23:46:49.599549 365 log.go:172] (0xc00060d360) (1) Data frame sent\nI0523 23:46:49.599559 365 log.go:172] (0xc000b55340) (0xc00060d360) Stream removed, broadcasting: 1\nI0523 23:46:49.599571 365 log.go:172] (0xc000b55340) Go away received\nI0523 23:46:49.599949 365 log.go:172] (0xc000b55340) (0xc00060d360) Stream removed, broadcasting: 1\nI0523 23:46:49.599965 365 log.go:172] (0xc000b55340) (0xc00060d9a0) Stream removed, broadcasting: 3\nI0523 23:46:49.599972 365 log.go:172] (0xc000b55340) (0xc00034d400) Stream removed, broadcasting: 5\n" May 23 23:46:49.605: INFO: stdout: "\naffinity-nodeport-transition-hn7gd\naffinity-nodeport-transition-bjqdq\naffinity-nodeport-transition-xs56h\naffinity-nodeport-transition-xs56h\naffinity-nodeport-transition-xs56h\naffinity-nodeport-transition-xs56h\naffinity-nodeport-transition-hn7gd\naffinity-nodeport-transition-hn7gd\naffinity-nodeport-transition-xs56h\naffinity-nodeport-transition-hn7gd\naffinity-nodeport-transition-xs56h\naffinity-nodeport-transition-xs56h\naffinity-nodeport-transition-xs56h\naffinity-nodeport-transition-hn7gd\naffinity-nodeport-transition-bjqdq\naffinity-nodeport-transition-bjqdq" May 23 23:46:49.605: INFO: Received response from host: May 23 23:46:49.605: INFO: Received response from host: affinity-nodeport-transition-hn7gd May 23 23:46:49.605: INFO: Received response from host: affinity-nodeport-transition-bjqdq May 23 23:46:49.605: INFO: Received response from host: affinity-nodeport-transition-xs56h May 23 23:46:49.605: INFO: Received response from host: affinity-nodeport-transition-xs56h May 23 23:46:49.605: INFO: Received response from host: affinity-nodeport-transition-xs56h May 23 23:46:49.605: INFO: Received response from host: affinity-nodeport-transition-xs56h May 23 23:46:49.605: INFO: Received response from host: affinity-nodeport-transition-hn7gd May 23 23:46:49.605: INFO: Received response 
from host: affinity-nodeport-transition-hn7gd May 23 23:46:49.605: INFO: Received response from host: affinity-nodeport-transition-xs56h May 23 23:46:49.605: INFO: Received response from host: affinity-nodeport-transition-hn7gd May 23 23:46:49.605: INFO: Received response from host: affinity-nodeport-transition-xs56h May 23 23:46:49.605: INFO: Received response from host: affinity-nodeport-transition-xs56h May 23 23:46:49.605: INFO: Received response from host: affinity-nodeport-transition-xs56h May 23 23:46:49.605: INFO: Received response from host: affinity-nodeport-transition-hn7gd May 23 23:46:49.605: INFO: Received response from host: affinity-nodeport-transition-bjqdq May 23 23:46:49.605: INFO: Received response from host: affinity-nodeport-transition-bjqdq May 23 23:46:49.613: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-879 execpod-affinity92gcs -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:31020/ ; done' May 23 23:46:49.912: INFO: stderr: "I0523 23:46:49.749226 385 log.go:172] (0xc000bbd290) (0xc000b68280) Create stream\nI0523 23:46:49.749294 385 log.go:172] (0xc000bbd290) (0xc000b68280) Stream added, broadcasting: 1\nI0523 23:46:49.754095 385 log.go:172] (0xc000bbd290) Reply frame received for 1\nI0523 23:46:49.754131 385 log.go:172] (0xc000bbd290) (0xc0006c0640) Create stream\nI0523 23:46:49.754142 385 log.go:172] (0xc000bbd290) (0xc0006c0640) Stream added, broadcasting: 3\nI0523 23:46:49.755185 385 log.go:172] (0xc000bbd290) Reply frame received for 3\nI0523 23:46:49.755233 385 log.go:172] (0xc000bbd290) (0xc0006aaaa0) Create stream\nI0523 23:46:49.755249 385 log.go:172] (0xc000bbd290) (0xc0006aaaa0) Stream added, broadcasting: 5\nI0523 23:46:49.756253 385 log.go:172] (0xc000bbd290) Reply frame received for 5\nI0523 23:46:49.821906 385 log.go:172] (0xc000bbd290) Data frame received for 5\nI0523 23:46:49.821933 385 log.go:172] (0xc0006aaaa0) (5) Data frame handling\nI0523 23:46:49.821941 385 log.go:172] (0xc0006aaaa0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31020/\nI0523 23:46:49.821957 385 log.go:172] (0xc000bbd290) Data frame received for 3\nI0523 23:46:49.821978 385 log.go:172] (0xc0006c0640) (3) Data frame handling\nI0523 23:46:49.821996 385 log.go:172] (0xc0006c0640) (3) Data frame sent\nI0523 23:46:49.825472 385 log.go:172] (0xc000bbd290) Data frame received for 3\nI0523 23:46:49.825501 385 log.go:172] (0xc0006c0640) (3) Data frame handling\nI0523 23:46:49.825523 385 log.go:172] (0xc0006c0640) (3) Data frame sent\nI0523 23:46:49.825994 385 log.go:172] (0xc000bbd290) Data frame received for 3\nI0523 23:46:49.826015 385 log.go:172] (0xc0006c0640) (3) Data frame handling\nI0523 23:46:49.826027 385 log.go:172] (0xc0006c0640) (3) Data frame sent\nI0523 23:46:49.826044 385 log.go:172] (0xc000bbd290) Data frame received for 5\nI0523 23:46:49.826053 385 log.go:172] (0xc0006aaaa0) (5) Data frame handling\nI0523 23:46:49.826062 385 log.go:172] (0xc0006aaaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31020/\nI0523 23:46:49.829411 385 log.go:172] (0xc000bbd290) Data frame received for 3\nI0523 23:46:49.829439 385 log.go:172] (0xc0006c0640) (3) Data frame handling\nI0523 23:46:49.829458 385 log.go:172] (0xc0006c0640) (3) Data frame sent\nI0523 23:46:49.829851 385 log.go:172] (0xc000bbd290) Data frame received for 3\nI0523 23:46:49.829867 385 log.go:172] 
(0xc0006c0640) (3) Data frame handling\nI0523 23:46:49.829883 385 log.go:172] (0xc0006c0640) (3) Data frame sent\nI0523 23:46:49.829907 385 log.go:172] (0xc000bbd290) Data frame received for 5\nI0523 23:46:49.829940 385 log.go:172] (0xc0006aaaa0) (5) Data frame handling\nI0523 23:46:49.829990 385 log.go:172] (0xc0006aaaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31020/\nI0523 23:46:49.838055 385 log.go:172] (0xc000bbd290) Data frame received for 3\nI0523 23:46:49.838173 385 log.go:172] (0xc0006c0640) (3) Data frame handling\nI0523 23:46:49.838272 385 log.go:172] (0xc0006c0640) (3) Data frame sent\nI0523 23:46:49.838570 385 log.go:172] (0xc000bbd290) Data frame received for 5\nI0523 23:46:49.838597 385 log.go:172] (0xc0006aaaa0) (5) Data frame handling\nI0523 23:46:49.838609 385 log.go:172] (0xc0006aaaa0) (5) Data frame sent\nI0523 23:46:49.838620 385 log.go:172] (0xc000bbd290) Data frame received for 3\nI0523 23:46:49.838626 385 log.go:172] (0xc0006c0640) (3) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31020/\nI0523 23:46:49.838641 385 log.go:172] (0xc0006c0640) (3) Data frame sent\nI0523 23:46:49.844012 385 log.go:172] (0xc000bbd290) Data frame received for 3\nI0523 23:46:49.844067 385 log.go:172] (0xc0006c0640) (3) Data frame handling\nI0523 23:46:49.844095 385 log.go:172] (0xc0006c0640) (3) Data frame sent\nI0523 23:46:49.844471 385 log.go:172] (0xc000bbd290) Data frame received for 5\nI0523 23:46:49.844503 385 log.go:172] (0xc0006aaaa0) (5) Data frame handling\nI0523 23:46:49.844516 385 log.go:172] (0xc0006aaaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31020/\nI0523 23:46:49.844537 385 log.go:172] (0xc000bbd290) Data frame received for 3\nI0523 23:46:49.844553 385 log.go:172] (0xc0006c0640) (3) Data frame handling\nI0523 23:46:49.844569 385 log.go:172] (0xc0006c0640) (3) Data frame sent\nI0523 23:46:49.850623 385 log.go:172] (0xc000bbd290) Data frame received for 3\nI0523 23:46:49.850650 385 log.go:172] (0xc0006c0640) (3) Data frame handling\nI0523 23:46:49.850661 385 log.go:172] (0xc0006c0640) (3) Data frame sent\nI0523 23:46:49.851391 385 log.go:172] (0xc000bbd290) Data frame received for 3\nI0523 23:46:49.851439 385 log.go:172] (0xc0006c0640) (3) Data frame handling\nI0523 23:46:49.851451 385 log.go:172] (0xc0006c0640) (3) Data frame sent\nI0523 23:46:49.851468 385 log.go:172] (0xc000bbd290) Data frame received for 5\nI0523 23:46:49.851480 385 log.go:172] (0xc0006aaaa0) (5) Data frame handling\nI0523 23:46:49.851489 385 log.go:172] (0xc0006aaaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31020/\nI0523 23:46:49.856088 385 log.go:172] (0xc000bbd290) Data frame received for 3\nI0523 23:46:49.856112 385 log.go:172] (0xc0006c0640) (3) Data frame handling\nI0523 23:46:49.856130 385 log.go:172] (0xc0006c0640) (3) Data frame sent\nI0523 23:46:49.856582 385 log.go:172] (0xc000bbd290) Data frame received for 3\nI0523 23:46:49.856615 385 log.go:172] (0xc0006c0640) (3) Data frame handling\nI0523 23:46:49.856628 385 log.go:172] (0xc0006c0640) (3) Data frame sent\nI0523 23:46:49.856647 385 log.go:172] (0xc000bbd290) Data frame received for 5\nI0523 23:46:49.856658 385 log.go:172] (0xc0006aaaa0) (5) Data frame handling\nI0523 23:46:49.856667 385 log.go:172] (0xc0006aaaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31020/\nI0523 23:46:49.862052 385 log.go:172] (0xc000bbd290) Data frame received for 
3\nI0523 23:46:49.862088 385 log.go:172] (0xc0006c0640) (3) Data frame handling\nI0523 23:46:49.862106 385 log.go:172] (0xc0006c0640) (3) Data frame sent\nI0523 23:46:49.862579 385 log.go:172] (0xc000bbd290) Data frame received for 3\nI0523 23:46:49.862603 385 log.go:172] (0xc000bbd290) Data frame received for 5\nI0523 23:46:49.862620 385 log.go:172] (0xc0006aaaa0) (5) Data frame handling\nI0523 23:46:49.862638 385 log.go:172] (0xc0006aaaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31020/\nI0523 23:46:49.862657 385 log.go:172] (0xc0006c0640) (3) Data frame handling\nI0523 23:46:49.862671 385 log.go:172] (0xc0006c0640) (3) Data frame sent\nI0523 23:46:49.865941 385 log.go:172] (0xc000bbd290) Data frame received for 3\nI0523 23:46:49.865972 385 log.go:172] (0xc0006c0640) (3) Data frame handling\nI0523 23:46:49.866004 385 log.go:172] (0xc0006c0640) (3) Data frame sent\nI0523 23:46:49.866310 385 log.go:172] (0xc000bbd290) Data frame received for 3\nI0523 23:46:49.866324 385 log.go:172] (0xc0006c0640) (3) Data frame handling\nI0523 23:46:49.866338 385 log.go:172] (0xc0006c0640) (3) Data frame sent\nI0523 23:46:49.866349 385 log.go:172] (0xc000bbd290) Data frame received for 5\nI0523 23:46:49.866360 385 log.go:172] (0xc0006aaaa0) (5) Data frame handling\nI0523 23:46:49.866372 385 log.go:172] (0xc0006aaaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31020/\nI0523 23:46:49.869965 385 log.go:172] (0xc000bbd290) Data frame received for 3\nI0523 23:46:49.869977 385 log.go:172] (0xc0006c0640) (3) Data frame handling\nI0523 23:46:49.869987 385 log.go:172] (0xc0006c0640) (3) Data frame sent\nI0523 23:46:49.870539 385 log.go:172] (0xc000bbd290) Data frame received for 3\nI0523 23:46:49.870569 385 log.go:172] (0xc0006c0640) (3) Data frame handling\nI0523 23:46:49.870579 385 log.go:172] (0xc0006c0640) (3) Data frame sent\nI0523 23:46:49.870593 385 log.go:172] (0xc000bbd290) Data frame received for 5\nI0523 23:46:49.870601 385 log.go:172] (0xc0006aaaa0) (5) Data frame handling\nI0523 23:46:49.870609 385 log.go:172] (0xc0006aaaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31020/\nI0523 23:46:49.874973 385 log.go:172] (0xc000bbd290) Data frame received for 3\nI0523 23:46:49.875009 385 log.go:172] (0xc0006c0640) (3) Data frame handling\nI0523 23:46:49.875044 385 log.go:172] (0xc0006c0640) (3) Data frame sent\nI0523 23:46:49.875352 385 log.go:172] (0xc000bbd290) Data frame received for 3\nI0523 23:46:49.875373 385 log.go:172] (0xc0006c0640) (3) Data frame handling\nI0523 23:46:49.875387 385 log.go:172] (0xc0006c0640) (3) Data frame sent\nI0523 23:46:49.875415 385 log.go:172] (0xc000bbd290) Data frame received for 5\nI0523 23:46:49.875439 385 log.go:172] (0xc0006aaaa0) (5) Data frame handling\nI0523 23:46:49.875450 385 log.go:172] (0xc0006aaaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31020/\nI0523 23:46:49.879033 385 log.go:172] (0xc000bbd290) Data frame received for 3\nI0523 23:46:49.879067 385 log.go:172] (0xc0006c0640) (3) Data frame handling\nI0523 23:46:49.879113 385 log.go:172] (0xc0006c0640) (3) Data frame sent\nI0523 23:46:49.879457 385 log.go:172] (0xc000bbd290) Data frame received for 3\nI0523 23:46:49.879493 385 log.go:172] (0xc0006c0640) (3) Data frame handling\nI0523 23:46:49.879516 385 log.go:172] (0xc0006c0640) (3) Data frame sent\nI0523 23:46:49.879539 385 log.go:172] (0xc000bbd290) Data frame received for 5\nI0523 23:46:49.879550 385 
log.go:172] (0xc0006aaaa0) (5) Data frame handling\nI0523 23:46:49.879561 385 log.go:172] (0xc0006aaaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31020/\nI0523 23:46:49.884113 385 log.go:172] (0xc000bbd290) Data frame received for 3\nI0523 23:46:49.884143 385 log.go:172] (0xc0006c0640) (3) Data frame handling\nI0523 23:46:49.884160 385 log.go:172] (0xc0006c0640) (3) Data frame sent\nI0523 23:46:49.884565 385 log.go:172] (0xc000bbd290) Data frame received for 3\nI0523 23:46:49.884591 385 log.go:172] (0xc0006c0640) (3) Data frame handling\nI0523 23:46:49.884603 385 log.go:172] (0xc0006c0640) (3) Data frame sent\nI0523 23:46:49.884618 385 log.go:172] (0xc000bbd290) Data frame received for 5\nI0523 23:46:49.884627 385 log.go:172] (0xc0006aaaa0) (5) Data frame handling\nI0523 23:46:49.884646 385 log.go:172] (0xc0006aaaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31020/\nI0523 23:46:49.889270 385 log.go:172] (0xc000bbd290) Data frame received for 3\nI0523 23:46:49.889298 385 log.go:172] (0xc0006c0640) (3) Data frame handling\nI0523 23:46:49.889315 385 log.go:172] (0xc0006c0640) (3) Data frame sent\nI0523 23:46:49.889701 385 log.go:172] (0xc000bbd290) Data frame received for 5\nI0523 23:46:49.889726 385 log.go:172] (0xc0006aaaa0) (5) Data frame handling\nI0523 23:46:49.889737 385 log.go:172] (0xc0006aaaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31020/\nI0523 23:46:49.889752 385 log.go:172] (0xc000bbd290) Data frame received for 3\nI0523 23:46:49.889759 385 log.go:172] (0xc0006c0640) (3) Data frame handling\nI0523 23:46:49.889767 385 log.go:172] (0xc0006c0640) (3) Data frame sent\nI0523 23:46:49.892836 385 log.go:172] (0xc000bbd290) Data frame received for 3\nI0523 23:46:49.892857 385 log.go:172] (0xc0006c0640) (3) Data frame handling\nI0523 23:46:49.892879 385 log.go:172] (0xc0006c0640) (3) Data frame sent\nI0523 23:46:49.893497 385 log.go:172] (0xc000bbd290) Data frame received for 3\nI0523 23:46:49.893524 385 log.go:172] (0xc0006c0640) (3) Data frame handling\nI0523 23:46:49.893540 385 log.go:172] (0xc0006c0640) (3) Data frame sent\nI0523 23:46:49.893556 385 log.go:172] (0xc000bbd290) Data frame received for 5\nI0523 23:46:49.893565 385 log.go:172] (0xc0006aaaa0) (5) Data frame handling\nI0523 23:46:49.893575 385 log.go:172] (0xc0006aaaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31020/\nI0523 23:46:49.898937 385 log.go:172] (0xc000bbd290) Data frame received for 3\nI0523 23:46:49.898948 385 log.go:172] (0xc0006c0640) (3) Data frame handling\nI0523 23:46:49.898955 385 log.go:172] (0xc0006c0640) (3) Data frame sent\nI0523 23:46:49.899891 385 log.go:172] (0xc000bbd290) Data frame received for 3\nI0523 23:46:49.899919 385 log.go:172] (0xc0006c0640) (3) Data frame handling\nI0523 23:46:49.899930 385 log.go:172] (0xc0006c0640) (3) Data frame sent\nI0523 23:46:49.899947 385 log.go:172] (0xc000bbd290) Data frame received for 5\nI0523 23:46:49.899965 385 log.go:172] (0xc0006aaaa0) (5) Data frame handling\nI0523 23:46:49.899983 385 log.go:172] (0xc0006aaaa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31020/\nI0523 23:46:49.903469 385 log.go:172] (0xc000bbd290) Data frame received for 3\nI0523 23:46:49.903493 385 log.go:172] (0xc0006c0640) (3) Data frame handling\nI0523 23:46:49.903533 385 log.go:172] (0xc0006c0640) (3) Data frame sent\nI0523 23:46:49.904025 385 log.go:172] (0xc000bbd290) Data frame received 
for 3\nI0523 23:46:49.904041 385 log.go:172] (0xc0006c0640) (3) Data frame handling\nI0523 23:46:49.904743 385 log.go:172] (0xc000bbd290) Data frame received for 5\nI0523 23:46:49.904761 385 log.go:172] (0xc0006aaaa0) (5) Data frame handling\nI0523 23:46:49.906471 385 log.go:172] (0xc000bbd290) Data frame received for 1\nI0523 23:46:49.906485 385 log.go:172] (0xc000b68280) (1) Data frame handling\nI0523 23:46:49.906495 385 log.go:172] (0xc000b68280) (1) Data frame sent\nI0523 23:46:49.906516 385 log.go:172] (0xc000bbd290) (0xc000b68280) Stream removed, broadcasting: 1\nI0523 23:46:49.906533 385 log.go:172] (0xc000bbd290) Go away received\nI0523 23:46:49.906884 385 log.go:172] (0xc000bbd290) (0xc000b68280) Stream removed, broadcasting: 1\nI0523 23:46:49.906908 385 log.go:172] (0xc000bbd290) (0xc0006c0640) Stream removed, broadcasting: 3\nI0523 23:46:49.906923 385 log.go:172] (0xc000bbd290) (0xc0006aaaa0) Stream removed, broadcasting: 5\n"
May 23 23:46:49.913: INFO: stdout: "\naffinity-nodeport-transition-bjqdq\naffinity-nodeport-transition-bjqdq\naffinity-nodeport-transition-bjqdq\naffinity-nodeport-transition-bjqdq\naffinity-nodeport-transition-bjqdq\naffinity-nodeport-transition-bjqdq\naffinity-nodeport-transition-bjqdq\naffinity-nodeport-transition-bjqdq\naffinity-nodeport-transition-bjqdq\naffinity-nodeport-transition-bjqdq\naffinity-nodeport-transition-bjqdq\naffinity-nodeport-transition-bjqdq\naffinity-nodeport-transition-bjqdq\naffinity-nodeport-transition-bjqdq\naffinity-nodeport-transition-bjqdq\naffinity-nodeport-transition-bjqdq"
May 23 23:46:49.913: INFO: Received response from host:
May 23 23:46:49.913: INFO: Received response from host: affinity-nodeport-transition-bjqdq
May 23 23:46:49.913: INFO: Received response from host: affinity-nodeport-transition-bjqdq
May 23 23:46:49.913: INFO: Received response from host: affinity-nodeport-transition-bjqdq
May 23 23:46:49.913: INFO: Received response from host: affinity-nodeport-transition-bjqdq
May 23 23:46:49.913: INFO: Received response from host: affinity-nodeport-transition-bjqdq
May 23 23:46:49.913: INFO: Received response from host: affinity-nodeport-transition-bjqdq
May 23 23:46:49.913: INFO: Received response from host: affinity-nodeport-transition-bjqdq
May 23 23:46:49.913: INFO: Received response from host: affinity-nodeport-transition-bjqdq
May 23 23:46:49.913: INFO: Received response from host: affinity-nodeport-transition-bjqdq
May 23 23:46:49.913: INFO: Received response from host: affinity-nodeport-transition-bjqdq
May 23 23:46:49.913: INFO: Received response from host: affinity-nodeport-transition-bjqdq
May 23 23:46:49.913: INFO: Received response from host: affinity-nodeport-transition-bjqdq
May 23 23:46:49.913: INFO: Received response from host: affinity-nodeport-transition-bjqdq
May 23 23:46:49.913: INFO: Received response from host: affinity-nodeport-transition-bjqdq
May 23 23:46:49.913: INFO: Received response from host: affinity-nodeport-transition-bjqdq
May 23 23:46:49.913: INFO: Received response from host: affinity-nodeport-transition-bjqdq
May 23 23:46:49.913: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-879, will wait for the garbage collector to delete the pods
May 23 23:46:50.340: INFO: Deleting ReplicationController affinity-nodeport-transition took: 239.492924ms
May 23 23:46:50.741: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 400.40303ms
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 23 23:46:56.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-879" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
• [SLOW TEST:19.094 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":62,"skipped":1189,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 23 23:46:56.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6023.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-6023.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6023.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-6023.svc.cluster.local;podARec=$$(hostname -i| awk -F.
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6023.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6023.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-6023.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6023.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-6023.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6023.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 23 23:47:02.256: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:02.259: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:02.262: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:02.265: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:02.275: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:02.279: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:02.282: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6023.svc.cluster.local from pod 
dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:02.285: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:02.291: INFO: Lookups using dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6023.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6023.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local jessie_udp@dns-test-service-2.dns-6023.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6023.svc.cluster.local] May 23 23:47:07.296: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:07.299: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:07.301: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:07.303: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:07.310: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:07.313: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:07.315: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:07.317: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:07.322: INFO: Lookups using dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local 
wheezy_udp@dns-test-service-2.dns-6023.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6023.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local jessie_udp@dns-test-service-2.dns-6023.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6023.svc.cluster.local] May 23 23:47:12.296: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:12.299: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:12.302: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:12.305: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:12.313: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:12.316: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:12.319: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:12.322: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:12.328: INFO: Lookups using dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6023.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6023.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local jessie_udp@dns-test-service-2.dns-6023.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6023.svc.cluster.local] May 23 23:47:17.297: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:17.301: INFO: Unable to read 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:17.305: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:17.308: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:17.317: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:17.320: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:17.323: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:17.327: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:17.334: INFO: Lookups using dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6023.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6023.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local jessie_udp@dns-test-service-2.dns-6023.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6023.svc.cluster.local] May 23 23:47:22.308: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:22.312: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:22.320: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:22.323: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested 
resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:22.331: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:22.334: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:22.337: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:22.340: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:22.345: INFO: Lookups using dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6023.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6023.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local jessie_udp@dns-test-service-2.dns-6023.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6023.svc.cluster.local] May 23 23:47:27.295: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:27.298: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:27.302: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:27.304: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:27.312: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:27.314: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:27.316: INFO: Unable to read 
jessie_udp@dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:27.320: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6023.svc.cluster.local from pod dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8: the server could not find the requested resource (get pods dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8) May 23 23:47:27.325: INFO: Lookups using dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6023.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6023.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6023.svc.cluster.local jessie_udp@dns-test-service-2.dns-6023.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6023.svc.cluster.local] May 23 23:47:32.340: INFO: DNS probes using dns-6023/dns-test-02a5c36e-8015-4c95-899f-517d0c62dfd8 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:47:32.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6023" for this suite. • [SLOW TEST:36.452 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":288,"completed":63,"skipped":1201,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:47:32.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 23 23:47:38.257: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:47:38.397: INFO: Waiting up to 3m0s for all (but 
0) nodes to be ready STEP: Destroying namespace "container-runtime-2608" for this suite. • [SLOW TEST:5.939 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":64,"skipped":1228,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:47:38.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-56320a92-e0ec-4522-8349-0c407b418bee STEP: Creating a pod to test consume secrets May 23 23:47:38.711: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2e38f43d-dfd3-4229-a372-806c0e9ea67f" in namespace "projected-6048" to be "Succeeded or Failed" May 23 23:47:38.736: INFO: Pod "pod-projected-secrets-2e38f43d-dfd3-4229-a372-806c0e9ea67f": Phase="Pending", Reason="", readiness=false. Elapsed: 25.131674ms May 23 23:47:40.757: INFO: Pod "pod-projected-secrets-2e38f43d-dfd3-4229-a372-806c0e9ea67f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045620131s May 23 23:47:42.761: INFO: Pod "pod-projected-secrets-2e38f43d-dfd3-4229-a372-806c0e9ea67f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.05021602s STEP: Saw pod success May 23 23:47:42.761: INFO: Pod "pod-projected-secrets-2e38f43d-dfd3-4229-a372-806c0e9ea67f" satisfied condition "Succeeded or Failed" May 23 23:47:42.764: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-2e38f43d-dfd3-4229-a372-806c0e9ea67f container projected-secret-volume-test: STEP: delete the pod May 23 23:47:42.866: INFO: Waiting for pod pod-projected-secrets-2e38f43d-dfd3-4229-a372-806c0e9ea67f to disappear May 23 23:47:42.907: INFO: Pod pod-projected-secrets-2e38f43d-dfd3-4229-a372-806c0e9ea67f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:47:42.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6048" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":65,"skipped":1249,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:47:42.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-projected-k2wm STEP: Creating a pod to test atomic-volume-subpath May 23 23:47:43.059: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-k2wm" in namespace "subpath-8409" to be "Succeeded or Failed" May 23 23:47:43.063: INFO: Pod "pod-subpath-test-projected-k2wm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.159429ms May 23 23:47:45.068: INFO: Pod "pod-subpath-test-projected-k2wm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008787849s May 23 23:47:47.072: INFO: Pod "pod-subpath-test-projected-k2wm": Phase="Running", Reason="", readiness=true. Elapsed: 4.013561734s May 23 23:47:49.077: INFO: Pod "pod-subpath-test-projected-k2wm": Phase="Running", Reason="", readiness=true. Elapsed: 6.018399967s May 23 23:47:51.081: INFO: Pod "pod-subpath-test-projected-k2wm": Phase="Running", Reason="", readiness=true. Elapsed: 8.022546742s May 23 23:47:53.086: INFO: Pod "pod-subpath-test-projected-k2wm": Phase="Running", Reason="", readiness=true. Elapsed: 10.026998137s May 23 23:47:55.090: INFO: Pod "pod-subpath-test-projected-k2wm": Phase="Running", Reason="", readiness=true. Elapsed: 12.031685794s May 23 23:47:57.095: INFO: Pod "pod-subpath-test-projected-k2wm": Phase="Running", Reason="", readiness=true. Elapsed: 14.036161395s May 23 23:47:59.100: INFO: Pod "pod-subpath-test-projected-k2wm": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.040792371s May 23 23:48:01.103: INFO: Pod "pod-subpath-test-projected-k2wm": Phase="Running", Reason="", readiness=true. Elapsed: 18.044681359s May 23 23:48:03.108: INFO: Pod "pod-subpath-test-projected-k2wm": Phase="Running", Reason="", readiness=true. Elapsed: 20.049177487s May 23 23:48:05.112: INFO: Pod "pod-subpath-test-projected-k2wm": Phase="Running", Reason="", readiness=true. Elapsed: 22.052748775s May 23 23:48:07.116: INFO: Pod "pod-subpath-test-projected-k2wm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.057574441s STEP: Saw pod success May 23 23:48:07.116: INFO: Pod "pod-subpath-test-projected-k2wm" satisfied condition "Succeeded or Failed" May 23 23:48:07.120: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-projected-k2wm container test-container-subpath-projected-k2wm: STEP: delete the pod May 23 23:48:07.224: INFO: Waiting for pod pod-subpath-test-projected-k2wm to disappear May 23 23:48:07.314: INFO: Pod pod-subpath-test-projected-k2wm no longer exists STEP: Deleting pod pod-subpath-test-projected-k2wm May 23 23:48:07.314: INFO: Deleting pod "pod-subpath-test-projected-k2wm" in namespace "subpath-8409" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:48:07.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8409" for this suite. • [SLOW TEST:24.407 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":288,"completed":66,"skipped":1261,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:48:07.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on node default medium May 23 23:48:07.409: INFO: Waiting up to 5m0s for pod "pod-71c3ca64-c20f-4f45-a276-9e588c6b6b40" in namespace "emptydir-5058" to be "Succeeded or Failed" May 23 23:48:07.439: INFO: Pod "pod-71c3ca64-c20f-4f45-a276-9e588c6b6b40": Phase="Pending", Reason="", readiness=false. Elapsed: 30.598193ms May 23 23:48:09.447: INFO: Pod "pod-71c3ca64-c20f-4f45-a276-9e588c6b6b40": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.038180437s May 23 23:48:11.451: INFO: Pod "pod-71c3ca64-c20f-4f45-a276-9e588c6b6b40": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041771185s STEP: Saw pod success May 23 23:48:11.451: INFO: Pod "pod-71c3ca64-c20f-4f45-a276-9e588c6b6b40" satisfied condition "Succeeded or Failed" May 23 23:48:11.453: INFO: Trying to get logs from node latest-worker2 pod pod-71c3ca64-c20f-4f45-a276-9e588c6b6b40 container test-container: STEP: delete the pod May 23 23:48:11.513: INFO: Waiting for pod pod-71c3ca64-c20f-4f45-a276-9e588c6b6b40 to disappear May 23 23:48:11.524: INFO: Pod pod-71c3ca64-c20f-4f45-a276-9e588c6b6b40 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:48:11.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5058" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":67,"skipped":1263,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:48:11.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 23 23:48:11.593: INFO: Waiting up to 5m0s for pod "downward-api-7a210e1f-92cc-41fb-aec8-d8bfd7c81cb9" in namespace "downward-api-5419" to be "Succeeded or Failed" May 23 23:48:11.607: INFO: Pod "downward-api-7a210e1f-92cc-41fb-aec8-d8bfd7c81cb9": Phase="Pending", Reason="", readiness=false. Elapsed: 13.794271ms May 23 23:48:13.668: INFO: Pod "downward-api-7a210e1f-92cc-41fb-aec8-d8bfd7c81cb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074564447s May 23 23:48:15.672: INFO: Pod "downward-api-7a210e1f-92cc-41fb-aec8-d8bfd7c81cb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.078613412s STEP: Saw pod success May 23 23:48:15.672: INFO: Pod "downward-api-7a210e1f-92cc-41fb-aec8-d8bfd7c81cb9" satisfied condition "Succeeded or Failed" May 23 23:48:15.675: INFO: Trying to get logs from node latest-worker pod downward-api-7a210e1f-92cc-41fb-aec8-d8bfd7c81cb9 container dapi-container: STEP: delete the pod May 23 23:48:15.695: INFO: Waiting for pod downward-api-7a210e1f-92cc-41fb-aec8-d8bfd7c81cb9 to disappear May 23 23:48:15.699: INFO: Pod downward-api-7a210e1f-92cc-41fb-aec8-d8bfd7c81cb9 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:48:15.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5419" for this suite. 
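The downward API spec above injects the pod's own metadata into its container through fieldRef env sources rather than a volume. A minimal sketch of an equivalent pod object, written against the Kubernetes Go API types (the function names, image, and POD_* env-var names here are illustrative assumptions, not lifted from the suite's source):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// fieldEnv builds an env var whose value the kubelet resolves from the
// pod's own object via the downward API.
func fieldEnv(name, fieldPath string) corev1.EnvVar {
    return corev1.EnvVar{
        Name: name,
        ValueFrom: &corev1.EnvVarSource{
            FieldRef: &corev1.ObjectFieldSelector{FieldPath: fieldPath},
        },
    }
}

// downwardAPIPod sketches the shape of pod such a test creates: the
// container sees its own name, namespace, and IP as environment variables.
func downwardAPIPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"}, // hypothetical name
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "dapi-container",
                Image:   "busybox:1.29", // illustrative image
                Command: []string{"sh", "-c", "env"},
                Env: []corev1.EnvVar{
                    fieldEnv("POD_NAME", "metadata.name"),
                    fieldEnv("POD_NAMESPACE", "metadata.namespace"),
                    fieldEnv("POD_IP", "status.podIP"),
                },
            }},
        },
    }
}

func main() { fmt.Println(downwardAPIPod().Spec.Containers[0].Env) }

Once the pod reaches "Succeeded or Failed", the test fetches the container log and asserts each expected value appears, which matches the wait-then-get-logs sequence recorded above.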
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":288,"completed":68,"skipped":1294,"failed":0} ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:48:15.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 23 23:48:15.798: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 23 23:48:17.759: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5303 create -f -' May 23 23:48:23.409: INFO: stderr: "" May 23 23:48:23.409: INFO: stdout: "e2e-test-crd-publish-openapi-5707-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 23 23:48:23.409: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5303 delete e2e-test-crd-publish-openapi-5707-crds test-cr' May 23 23:48:23.530: INFO: stderr: "" May 23 23:48:23.530: INFO: stdout: "e2e-test-crd-publish-openapi-5707-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" May 23 23:48:23.530: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5303 apply -f -' May 23 23:48:26.908: INFO: stderr: "" May 23 23:48:26.908: INFO: stdout: "e2e-test-crd-publish-openapi-5707-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 23 23:48:26.908: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5303 delete e2e-test-crd-publish-openapi-5707-crds test-cr' May 23 23:48:27.009: INFO: stderr: "" May 23 23:48:27.009: INFO: stdout: "e2e-test-crd-publish-openapi-5707-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 23 23:48:27.009: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5707-crds' May 23 23:48:29.915: INFO: stderr: "" May 23 23:48:29.915: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5707-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:48:32.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5303" for this suite. 
• [SLOW TEST:17.137 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":288,"completed":69,"skipped":1294,"failed":0} SSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:48:32.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 23 23:48:32.968: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 23 23:48:37.972: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 23 23:48:37.972: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 23 23:48:38.058: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-618 /apis/apps/v1/namespaces/deployment-618/deployments/test-cleanup-deployment 7ad777c8-68bf-41fe-8925-9dab25a9a409 7145001 1 2020-05-23 23:48:37 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2020-05-23 23:48:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002cbecb8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} May 23 23:48:38.121: INFO: New ReplicaSet "test-cleanup-deployment-6688745694" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-6688745694 deployment-618 /apis/apps/v1/namespaces/deployment-618/replicasets/test-cleanup-deployment-6688745694 667ba310-0417-4514-ac35-ae641adeedfa 7145008 1 2020-05-23 23:48:37 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 7ad777c8-68bf-41fe-8925-9dab25a9a409 0xc002cbf267 0xc002cbf268}] [] [{kube-controller-manager Update apps/v1 2020-05-23 23:48:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ad777c8-68bf-41fe-8925-9dab25a9a409\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 6688745694,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002cbf2f8 ClusterFirst map[] false false false
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 23 23:48:38.122: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 23 23:48:38.122: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-618 /apis/apps/v1/namespaces/deployment-618/replicasets/test-cleanup-controller 4210b9da-1802-41e7-bf8c-78f798059e7a 7145002 1 2020-05-23 23:48:32 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 7ad777c8-68bf-41fe-8925-9dab25a9a409 0xc002cbf0df 0xc002cbf100}] [] [{e2e.test Update apps/v1 2020-05-23 23:48:32 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-23 23:48:37 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"7ad777c8-68bf-41fe-8925-9dab25a9a409\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002cbf198 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 23 23:48:38.220: INFO: Pod "test-cleanup-controller-2zw95" is available: &Pod{ObjectMeta:{test-cleanup-controller-2zw95 test-cleanup-controller- deployment-618 /api/v1/namespaces/deployment-618/pods/test-cleanup-controller-2zw95 9ca5493f-b6dc-4171-9a74-b58660d746f2 7144990 0 2020-05-23 23:48:32 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 4210b9da-1802-41e7-bf8c-78f798059e7a 0xc002cbf987 0xc002cbf988}] [] [{kube-controller-manager Update v1 2020-05-23 23:48:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4210b9da-1802-41e7-bf8c-78f798059e7a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-23 23:48:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.92\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8s2p2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8s2p2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8s2p2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionP
olicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-23 23:48:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-23 23:48:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-23 23:48:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-23 23:48:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.92,StartTime:2020-05-23 23:48:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-23 23:48:35 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://10de73b680d68ca4116a47737b97fa7cb4e83487f69f42fa5718ff91790c0356,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.92,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 23 23:48:38.220: INFO: Pod "test-cleanup-deployment-6688745694-cqf7c" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-6688745694-cqf7c test-cleanup-deployment-6688745694- deployment-618 /api/v1/namespaces/deployment-618/pods/test-cleanup-deployment-6688745694-cqf7c b9fd3bab-f642-4a37-b22a-c1a6e146ce23 7145009 0 2020-05-23 23:48:38 +0000 UTC map[name:cleanup-pod pod-template-hash:6688745694] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-6688745694 667ba310-0417-4514-ac35-ae641adeedfa 0xc002cbfc87 0xc002cbfc88}] [] [{kube-controller-manager Update v1 2020-05-23 23:48:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"667ba310-0417-4514-ac35-ae641adeedfa\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8s2p2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8s2p2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8s2p2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-23 
23:48:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:48:38.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-618" for this suite. • [SLOW TEST:5.410 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":288,"completed":70,"skipped":1297,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:48:38.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 23 23:48:38.308: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cb056961-ecec-4c7f-b857-a050d2e1986c" in namespace "downward-api-9801" to be "Succeeded or Failed" May 23 23:48:38.374: INFO: Pod "downwardapi-volume-cb056961-ecec-4c7f-b857-a050d2e1986c": Phase="Pending", Reason="", readiness=false. Elapsed: 66.663562ms May 23 23:48:40.620: INFO: Pod "downwardapi-volume-cb056961-ecec-4c7f-b857-a050d2e1986c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.312189918s May 23 23:48:42.624: INFO: Pod "downwardapi-volume-cb056961-ecec-4c7f-b857-a050d2e1986c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.316111662s STEP: Saw pod success May 23 23:48:42.624: INFO: Pod "downwardapi-volume-cb056961-ecec-4c7f-b857-a050d2e1986c" satisfied condition "Succeeded or Failed" May 23 23:48:42.627: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-cb056961-ecec-4c7f-b857-a050d2e1986c container client-container: STEP: delete the pod May 23 23:48:42.661: INFO: Waiting for pod downwardapi-volume-cb056961-ecec-4c7f-b857-a050d2e1986c to disappear May 23 23:48:42.669: INFO: Pod downwardapi-volume-cb056961-ecec-4c7f-b857-a050d2e1986c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:48:42.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9801" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":71,"skipped":1317,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:48:42.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-2140 STEP: creating a selector STEP: Creating the service pods in kubernetes May 23 23:48:43.004: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 23 23:48:43.116: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 23 23:48:45.215: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 23 23:48:47.120: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 23 23:48:49.119: INFO: The status of Pod netserver-0 is Running (Ready = false) May 23 23:48:51.120: INFO: The status of Pod netserver-0 is Running (Ready = false) May 23 23:48:53.129: INFO: The status of Pod netserver-0 is Running (Ready = false) May 23 23:48:55.119: INFO: The status of Pod netserver-0 is Running (Ready = false) May 23 23:48:57.120: INFO: The status of Pod netserver-0 is Running (Ready = false) May 23 23:48:59.135: INFO: The status of Pod netserver-0 is Running (Ready = false) May 23 23:49:01.123: INFO: The status of Pod netserver-0 is Running (Ready = false) May 23 23:49:03.135: INFO: The status of Pod netserver-0 is Running (Ready = true) May 23 23:49:03.141: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 23 23:49:07.211: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.93:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2140 PodName:host-test-container-pod 
ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 23 23:49:07.211: INFO: >>> kubeConfig: /root/.kube/config I0523 23:49:07.244044 7 log.go:172] (0xc00109c4d0) (0xc002340dc0) Create stream I0523 23:49:07.244082 7 log.go:172] (0xc00109c4d0) (0xc002340dc0) Stream added, broadcasting: 1 I0523 23:49:07.246537 7 log.go:172] (0xc00109c4d0) Reply frame received for 1 I0523 23:49:07.246572 7 log.go:172] (0xc00109c4d0) (0xc002340e60) Create stream I0523 23:49:07.246585 7 log.go:172] (0xc00109c4d0) (0xc002340e60) Stream added, broadcasting: 3 I0523 23:49:07.247559 7 log.go:172] (0xc00109c4d0) Reply frame received for 3 I0523 23:49:07.247583 7 log.go:172] (0xc00109c4d0) (0xc001307cc0) Create stream I0523 23:49:07.247605 7 log.go:172] (0xc00109c4d0) (0xc001307cc0) Stream added, broadcasting: 5 I0523 23:49:07.248617 7 log.go:172] (0xc00109c4d0) Reply frame received for 5 I0523 23:49:07.380730 7 log.go:172] (0xc00109c4d0) Data frame received for 3 I0523 23:49:07.380756 7 log.go:172] (0xc002340e60) (3) Data frame handling I0523 23:49:07.380774 7 log.go:172] (0xc002340e60) (3) Data frame sent I0523 23:49:07.380905 7 log.go:172] (0xc00109c4d0) Data frame received for 3 I0523 23:49:07.380925 7 log.go:172] (0xc002340e60) (3) Data frame handling I0523 23:49:07.381054 7 log.go:172] (0xc00109c4d0) Data frame received for 5 I0523 23:49:07.381064 7 log.go:172] (0xc001307cc0) (5) Data frame handling I0523 23:49:07.383166 7 log.go:172] (0xc00109c4d0) Data frame received for 1 I0523 23:49:07.383182 7 log.go:172] (0xc002340dc0) (1) Data frame handling I0523 23:49:07.383191 7 log.go:172] (0xc002340dc0) (1) Data frame sent I0523 23:49:07.383203 7 log.go:172] (0xc00109c4d0) (0xc002340dc0) Stream removed, broadcasting: 1 I0523 23:49:07.383305 7 log.go:172] (0xc00109c4d0) (0xc002340dc0) Stream removed, broadcasting: 1 I0523 23:49:07.383319 7 log.go:172] (0xc00109c4d0) (0xc002340e60) Stream removed, broadcasting: 3 I0523 23:49:07.383330 7 log.go:172] (0xc00109c4d0) (0xc001307cc0) Stream removed, broadcasting: 5 May 23 23:49:07.383: INFO: Found all expected endpoints: [netserver-0] I0523 23:49:07.383638 7 log.go:172] (0xc00109c4d0) Go away received May 23 23:49:07.388: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.77:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2140 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 23 23:49:07.388: INFO: >>> kubeConfig: /root/.kube/config I0523 23:49:07.463319 7 log.go:172] (0xc00109ca50) (0xc002341220) Create stream I0523 23:49:07.463360 7 log.go:172] (0xc00109ca50) (0xc002341220) Stream added, broadcasting: 1 I0523 23:49:07.465735 7 log.go:172] (0xc00109ca50) Reply frame received for 1 I0523 23:49:07.465762 7 log.go:172] (0xc00109ca50) (0xc0023412c0) Create stream I0523 23:49:07.465772 7 log.go:172] (0xc00109ca50) (0xc0023412c0) Stream added, broadcasting: 3 I0523 23:49:07.466542 7 log.go:172] (0xc00109ca50) Reply frame received for 3 I0523 23:49:07.466564 7 log.go:172] (0xc00109ca50) (0xc001f10000) Create stream I0523 23:49:07.466574 7 log.go:172] (0xc00109ca50) (0xc001f10000) Stream added, broadcasting: 5 I0523 23:49:07.467367 7 log.go:172] (0xc00109ca50) Reply frame received for 5 I0523 23:49:07.558570 7 log.go:172] (0xc00109ca50) Data frame received for 5 I0523 23:49:07.558600 7 log.go:172] (0xc001f10000) (5) Data frame handling I0523 23:49:07.558623 7 log.go:172] (0xc00109ca50) Data 
frame received for 3 I0523 23:49:07.558634 7 log.go:172] (0xc0023412c0) (3) Data frame handling I0523 23:49:07.558642 7 log.go:172] (0xc0023412c0) (3) Data frame sent I0523 23:49:07.558649 7 log.go:172] (0xc00109ca50) Data frame received for 3 I0523 23:49:07.558655 7 log.go:172] (0xc0023412c0) (3) Data frame handling I0523 23:49:07.559641 7 log.go:172] (0xc00109ca50) Data frame received for 1 I0523 23:49:07.559687 7 log.go:172] (0xc002341220) (1) Data frame handling I0523 23:49:07.559704 7 log.go:172] (0xc002341220) (1) Data frame sent I0523 23:49:07.559727 7 log.go:172] (0xc00109ca50) (0xc002341220) Stream removed, broadcasting: 1 I0523 23:49:07.559741 7 log.go:172] (0xc00109ca50) Go away received I0523 23:49:07.559800 7 log.go:172] (0xc00109ca50) (0xc002341220) Stream removed, broadcasting: 1 I0523 23:49:07.559820 7 log.go:172] (0xc00109ca50) (0xc0023412c0) Stream removed, broadcasting: 3 I0523 23:49:07.559827 7 log.go:172] (0xc00109ca50) (0xc001f10000) Stream removed, broadcasting: 5 May 23 23:49:07.559: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:49:07.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2140" for this suite. • [SLOW TEST:24.867 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":72,"skipped":1332,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:49:07.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 23 23:49:08.234: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 23 23:49:10.280: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725874548, 
loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725874548, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725874548, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725874548, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 23 23:49:12.284: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725874548, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725874548, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725874548, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725874548, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 23 23:49:15.790: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:49:16.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-894" for this suite. STEP: Destroying namespace "webhook-894-markers" for this suite. 
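For reference, the rule flip exercised in the steps above maps onto a JSON patch against the MutatingWebhookConfiguration's rules. A minimal sketch of equivalent kubectl calls, assuming admissionregistration.k8s.io/v1 and using the illustrative name e2e-test-mutating-webhook (the suite generates its own object names):

    # Drop CREATE from the first webhook's first rule: configMaps created afterwards are not mutated
    kubectl patch mutatingwebhookconfiguration e2e-test-mutating-webhook --type=json \
      -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["UPDATE"]}]'
    # Patch CREATE back into the rule: newly created configMaps are mutated again
    kubectl patch mutatingwebhookconfiguration e2e-test-mutating-webhook --type=json \
      -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE","UPDATE"]}]'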
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.549 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":288,"completed":73,"skipped":1341,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:49:16.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 23 23:51:16.275: INFO: Deleting pod "var-expansion-b0aed34f-934d-4bd7-b817-1fe8facf10d1" in namespace "var-expansion-796" May 23 23:51:16.279: INFO: Wait up to 5m0s for pod "var-expansion-b0aed34f-934d-4bd7-b817-1fe8facf10d1" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:51:20.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-796" for this suite. 
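The failure checked above comes from volume subpath expansion: when subPathExpr substitutes to an absolute path, the kubelet refuses to start the container, so the pod never runs and the test only has to wait out the grace period and delete it. A minimal sketch of the pod shape involved, with illustrative names and values rather than the suite's generated ones:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: var-expansion-demo            # illustrative name
    spec:
      containers:
      - name: dapi-container
        image: busybox
        command: ["sh", "-c", "sleep 3600"]
        env:
        - name: POD_NAME
          value: "/absolute-path"         # substitution result is an absolute path
        volumeMounts:
        - name: workdir
          mountPath: /volume_mount
          subPathExpr: $(POD_NAME)        # absolute after expansion, so container creation is rejected
      volumes:
      - name: workdir
        emptyDir: {}
    EOF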
• [SLOW TEST:124.198 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":288,"completed":74,"skipped":1367,"failed":0} [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:51:20.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-6927 STEP: creating replication controller nodeport-test in namespace services-6927 I0523 23:51:20.497313 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-6927, replica count: 2 I0523 23:51:23.547803 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0523 23:51:26.548035 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 23 23:51:26.548: INFO: Creating new exec pod May 23 23:51:31.568: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6927 execpodvnkp5 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' May 23 23:51:31.804: INFO: stderr: "I0523 23:51:31.697450 510 log.go:172] (0xc000a61600) (0xc0007054a0) Create stream\nI0523 23:51:31.697518 510 log.go:172] (0xc000a61600) (0xc0007054a0) Stream added, broadcasting: 1\nI0523 23:51:31.699710 510 log.go:172] (0xc000a61600) Reply frame received for 1\nI0523 23:51:31.699761 510 log.go:172] (0xc000a61600) (0xc000705ea0) Create stream\nI0523 23:51:31.699777 510 log.go:172] (0xc000a61600) (0xc000705ea0) Stream added, broadcasting: 3\nI0523 23:51:31.700701 510 log.go:172] (0xc000a61600) Reply frame received for 3\nI0523 23:51:31.700744 510 log.go:172] (0xc000a61600) (0xc00083ce60) Create stream\nI0523 23:51:31.700769 510 log.go:172] (0xc000a61600) (0xc00083ce60) Stream added, broadcasting: 5\nI0523 23:51:31.701761 510 log.go:172] (0xc000a61600) Reply frame received for 5\nI0523 23:51:31.781992 510 log.go:172] (0xc000a61600) Data frame received for 5\nI0523 23:51:31.782023 510 log.go:172] (0xc00083ce60) (5) Data frame handling\nI0523 23:51:31.782035 510 log.go:172] (0xc00083ce60) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0523 23:51:31.796443 510 log.go:172] (0xc000a61600) Data frame received for 5\nI0523 
23:51:31.796460 510 log.go:172] (0xc00083ce60) (5) Data frame handling\nI0523 23:51:31.796469 510 log.go:172] (0xc00083ce60) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0523 23:51:31.796582 510 log.go:172] (0xc000a61600) Data frame received for 5\nI0523 23:51:31.796596 510 log.go:172] (0xc00083ce60) (5) Data frame handling\nI0523 23:51:31.797397 510 log.go:172] (0xc000a61600) Data frame received for 3\nI0523 23:51:31.797436 510 log.go:172] (0xc000705ea0) (3) Data frame handling\nI0523 23:51:31.799084 510 log.go:172] (0xc000a61600) Data frame received for 1\nI0523 23:51:31.799106 510 log.go:172] (0xc0007054a0) (1) Data frame handling\nI0523 23:51:31.799120 510 log.go:172] (0xc0007054a0) (1) Data frame sent\nI0523 23:51:31.799135 510 log.go:172] (0xc000a61600) (0xc0007054a0) Stream removed, broadcasting: 1\nI0523 23:51:31.799222 510 log.go:172] (0xc000a61600) Go away received\nI0523 23:51:31.799441 510 log.go:172] (0xc000a61600) (0xc0007054a0) Stream removed, broadcasting: 1\nI0523 23:51:31.799460 510 log.go:172] (0xc000a61600) (0xc000705ea0) Stream removed, broadcasting: 3\nI0523 23:51:31.799469 510 log.go:172] (0xc000a61600) (0xc00083ce60) Stream removed, broadcasting: 5\n" May 23 23:51:31.805: INFO: stdout: "" May 23 23:51:31.806: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6927 execpodvnkp5 -- /bin/sh -x -c nc -zv -t -w 2 10.104.180.202 80' May 23 23:51:32.025: INFO: stderr: "I0523 23:51:31.945527 530 log.go:172] (0xc000a09550) (0xc0009523c0) Create stream\nI0523 23:51:31.945591 530 log.go:172] (0xc000a09550) (0xc0009523c0) Stream added, broadcasting: 1\nI0523 23:51:31.951657 530 log.go:172] (0xc000a09550) Reply frame received for 1\nI0523 23:51:31.951708 530 log.go:172] (0xc000a09550) (0xc000713040) Create stream\nI0523 23:51:31.951729 530 log.go:172] (0xc000a09550) (0xc000713040) Stream added, broadcasting: 3\nI0523 23:51:31.952686 530 log.go:172] (0xc000a09550) Reply frame received for 3\nI0523 23:51:31.952712 530 log.go:172] (0xc000a09550) (0xc0005ba640) Create stream\nI0523 23:51:31.952725 530 log.go:172] (0xc000a09550) (0xc0005ba640) Stream added, broadcasting: 5\nI0523 23:51:31.953999 530 log.go:172] (0xc000a09550) Reply frame received for 5\nI0523 23:51:32.019211 530 log.go:172] (0xc000a09550) Data frame received for 3\nI0523 23:51:32.019244 530 log.go:172] (0xc000713040) (3) Data frame handling\nI0523 23:51:32.019263 530 log.go:172] (0xc000a09550) Data frame received for 5\nI0523 23:51:32.019272 530 log.go:172] (0xc0005ba640) (5) Data frame handling\nI0523 23:51:32.019281 530 log.go:172] (0xc0005ba640) (5) Data frame sent\nI0523 23:51:32.019287 530 log.go:172] (0xc000a09550) Data frame received for 5\nI0523 23:51:32.019292 530 log.go:172] (0xc0005ba640) (5) Data frame handling\n+ nc -zv -t -w 2 10.104.180.202 80\nConnection to 10.104.180.202 80 port [tcp/http] succeeded!\nI0523 23:51:32.020272 530 log.go:172] (0xc000a09550) Data frame received for 1\nI0523 23:51:32.020297 530 log.go:172] (0xc0009523c0) (1) Data frame handling\nI0523 23:51:32.020306 530 log.go:172] (0xc0009523c0) (1) Data frame sent\nI0523 23:51:32.020318 530 log.go:172] (0xc000a09550) (0xc0009523c0) Stream removed, broadcasting: 1\nI0523 23:51:32.020339 530 log.go:172] (0xc000a09550) Go away received\nI0523 23:51:32.020634 530 log.go:172] (0xc000a09550) (0xc0009523c0) Stream removed, broadcasting: 1\nI0523 23:51:32.020657 530 log.go:172] (0xc000a09550) (0xc000713040) Stream removed, 
broadcasting: 3\nI0523 23:51:32.020668 530 log.go:172] (0xc000a09550) (0xc0005ba640) Stream removed, broadcasting: 5\n" May 23 23:51:32.025: INFO: stdout: "" May 23 23:51:32.025: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6927 execpodvnkp5 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31646' May 23 23:51:32.221: INFO: stderr: "I0523 23:51:32.150825 549 log.go:172] (0xc000b8e000) (0xc0000c26e0) Create stream\nI0523 23:51:32.150877 549 log.go:172] (0xc000b8e000) (0xc0000c26e0) Stream added, broadcasting: 1\nI0523 23:51:32.153598 549 log.go:172] (0xc000b8e000) Reply frame received for 1\nI0523 23:51:32.153649 549 log.go:172] (0xc000b8e000) (0xc000329ae0) Create stream\nI0523 23:51:32.153684 549 log.go:172] (0xc000b8e000) (0xc000329ae0) Stream added, broadcasting: 3\nI0523 23:51:32.154713 549 log.go:172] (0xc000b8e000) Reply frame received for 3\nI0523 23:51:32.154752 549 log.go:172] (0xc000b8e000) (0xc0000c2e60) Create stream\nI0523 23:51:32.154764 549 log.go:172] (0xc000b8e000) (0xc0000c2e60) Stream added, broadcasting: 5\nI0523 23:51:32.155957 549 log.go:172] (0xc000b8e000) Reply frame received for 5\nI0523 23:51:32.211731 549 log.go:172] (0xc000b8e000) Data frame received for 5\nI0523 23:51:32.211814 549 log.go:172] (0xc0000c2e60) (5) Data frame handling\nI0523 23:51:32.211852 549 log.go:172] (0xc0000c2e60) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.13 31646\nConnection to 172.17.0.13 31646 port [tcp/31646] succeeded!\nI0523 23:51:32.212089 549 log.go:172] (0xc000b8e000) Data frame received for 5\nI0523 23:51:32.212240 549 log.go:172] (0xc0000c2e60) (5) Data frame handling\nI0523 23:51:32.212302 549 log.go:172] (0xc000b8e000) Data frame received for 3\nI0523 23:51:32.212326 549 log.go:172] (0xc000329ae0) (3) Data frame handling\nI0523 23:51:32.216603 549 log.go:172] (0xc000b8e000) Data frame received for 1\nI0523 23:51:32.216730 549 log.go:172] (0xc0000c26e0) (1) Data frame handling\nI0523 23:51:32.216841 549 log.go:172] (0xc0000c26e0) (1) Data frame sent\nI0523 23:51:32.216902 549 log.go:172] (0xc000b8e000) (0xc0000c26e0) Stream removed, broadcasting: 1\nI0523 23:51:32.216926 549 log.go:172] (0xc000b8e000) Go away received\nI0523 23:51:32.217579 549 log.go:172] (0xc000b8e000) (0xc0000c26e0) Stream removed, broadcasting: 1\nI0523 23:51:32.217606 549 log.go:172] (0xc000b8e000) (0xc000329ae0) Stream removed, broadcasting: 3\nI0523 23:51:32.217619 549 log.go:172] (0xc000b8e000) (0xc0000c2e60) Stream removed, broadcasting: 5\n" May 23 23:51:32.222: INFO: stdout: "" May 23 23:51:32.222: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6927 execpodvnkp5 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31646' May 23 23:51:32.426: INFO: stderr: "I0523 23:51:32.347471 569 log.go:172] (0xc0009eb3f0) (0xc0006daf00) Create stream\nI0523 23:51:32.347537 569 log.go:172] (0xc0009eb3f0) (0xc0006daf00) Stream added, broadcasting: 1\nI0523 23:51:32.351177 569 log.go:172] (0xc0009eb3f0) Reply frame received for 1\nI0523 23:51:32.351242 569 log.go:172] (0xc0009eb3f0) (0xc000660dc0) Create stream\nI0523 23:51:32.351257 569 log.go:172] (0xc0009eb3f0) (0xc000660dc0) Stream added, broadcasting: 3\nI0523 23:51:32.353719 569 log.go:172] (0xc0009eb3f0) Reply frame received for 3\nI0523 23:51:32.353746 569 log.go:172] (0xc0009eb3f0) (0xc00025eaa0) Create stream\nI0523 23:51:32.353754 569 log.go:172] (0xc0009eb3f0) (0xc00025eaa0) Stream added, 
broadcasting: 5\nI0523 23:51:32.354738 569 log.go:172] (0xc0009eb3f0) Reply frame received for 5\nI0523 23:51:32.410981 569 log.go:172] (0xc0009eb3f0) Data frame received for 3\nI0523 23:51:32.411004 569 log.go:172] (0xc000660dc0) (3) Data frame handling\nI0523 23:51:32.411590 569 log.go:172] (0xc0009eb3f0) Data frame received for 5\nI0523 23:51:32.411610 569 log.go:172] (0xc00025eaa0) (5) Data frame handling\nI0523 23:51:32.411621 569 log.go:172] (0xc00025eaa0) (5) Data frame sent\nI0523 23:51:32.411634 569 log.go:172] (0xc0009eb3f0) Data frame received for 5\nI0523 23:51:32.411644 569 log.go:172] (0xc00025eaa0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 31646\nConnection to 172.17.0.12 31646 port [tcp/31646] succeeded!\nI0523 23:51:32.413857 569 log.go:172] (0xc0009eb3f0) Data frame received for 1\nI0523 23:51:32.413870 569 log.go:172] (0xc0006daf00) (1) Data frame handling\nI0523 23:51:32.413876 569 log.go:172] (0xc0006daf00) (1) Data frame sent\nI0523 23:51:32.413985 569 log.go:172] (0xc0009eb3f0) (0xc0006daf00) Stream removed, broadcasting: 1\nI0523 23:51:32.414028 569 log.go:172] (0xc0009eb3f0) Go away received\nI0523 23:51:32.414215 569 log.go:172] (0xc0009eb3f0) (0xc0006daf00) Stream removed, broadcasting: 1\nI0523 23:51:32.414233 569 log.go:172] (0xc0009eb3f0) (0xc000660dc0) Stream removed, broadcasting: 3\nI0523 23:51:32.414240 569 log.go:172] (0xc0009eb3f0) (0xc00025eaa0) Stream removed, broadcasting: 5\n" May 23 23:51:32.426: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:51:32.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6927" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:12.122 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":288,"completed":75,"skipped":1367,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:51:32.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs May 23 23:51:32.520: INFO: Waiting up to 5m0s for pod "pod-948f5ade-ffdc-4627-a365-80352aaff9a2" in namespace "emptydir-3072" to be "Succeeded or Failed" May 23 23:51:32.536: INFO: Pod "pod-948f5ade-ffdc-4627-a365-80352aaff9a2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.90258ms May 23 23:51:34.615: INFO: Pod "pod-948f5ade-ffdc-4627-a365-80352aaff9a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095057357s May 23 23:51:36.620: INFO: Pod "pod-948f5ade-ffdc-4627-a365-80352aaff9a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.099519397s STEP: Saw pod success May 23 23:51:36.620: INFO: Pod "pod-948f5ade-ffdc-4627-a365-80352aaff9a2" satisfied condition "Succeeded or Failed" May 23 23:51:36.623: INFO: Trying to get logs from node latest-worker2 pod pod-948f5ade-ffdc-4627-a365-80352aaff9a2 container test-container: STEP: delete the pod May 23 23:51:36.755: INFO: Waiting for pod pod-948f5ade-ffdc-4627-a365-80352aaff9a2 to disappear May 23 23:51:36.819: INFO: Pod pod-948f5ade-ffdc-4627-a365-80352aaff9a2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:51:36.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3072" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":76,"skipped":1371,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:51:36.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-586 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating stateful set ss in namespace statefulset-586 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-586 May 23 23:51:36.940: INFO: Found 0 stateful pods, waiting for 1 May 23 23:51:46.945: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 23 23:51:46.948: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-586 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 23 23:51:47.223: INFO: stderr: "I0523 23:51:47.089780 591 log.go:172] (0xc000a1b6b0) (0xc000840e60) Create stream\nI0523 23:51:47.089827 591 log.go:172] (0xc000a1b6b0) (0xc000840e60) Stream added, broadcasting: 1\nI0523 23:51:47.094775 591 log.go:172] (0xc000a1b6b0) Reply frame received for 1\nI0523 23:51:47.094818 591 log.go:172] (0xc000a1b6b0) (0xc00083b4a0) Create stream\nI0523 23:51:47.094829 591 log.go:172] 
(0xc000a1b6b0) (0xc00083b4a0) Stream added, broadcasting: 3\nI0523 23:51:47.095813 591 log.go:172] (0xc000a1b6b0) Reply frame received for 3\nI0523 23:51:47.095852 591 log.go:172] (0xc000a1b6b0) (0xc0006d6c80) Create stream\nI0523 23:51:47.095862 591 log.go:172] (0xc000a1b6b0) (0xc0006d6c80) Stream added, broadcasting: 5\nI0523 23:51:47.096795 591 log.go:172] (0xc000a1b6b0) Reply frame received for 5\nI0523 23:51:47.181798 591 log.go:172] (0xc000a1b6b0) Data frame received for 5\nI0523 23:51:47.181833 591 log.go:172] (0xc0006d6c80) (5) Data frame handling\nI0523 23:51:47.181863 591 log.go:172] (0xc0006d6c80) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0523 23:51:47.216608 591 log.go:172] (0xc000a1b6b0) Data frame received for 5\nI0523 23:51:47.216640 591 log.go:172] (0xc0006d6c80) (5) Data frame handling\nI0523 23:51:47.216672 591 log.go:172] (0xc000a1b6b0) Data frame received for 3\nI0523 23:51:47.216681 591 log.go:172] (0xc00083b4a0) (3) Data frame handling\nI0523 23:51:47.216692 591 log.go:172] (0xc00083b4a0) (3) Data frame sent\nI0523 23:51:47.216702 591 log.go:172] (0xc000a1b6b0) Data frame received for 3\nI0523 23:51:47.216712 591 log.go:172] (0xc00083b4a0) (3) Data frame handling\nI0523 23:51:47.218595 591 log.go:172] (0xc000a1b6b0) Data frame received for 1\nI0523 23:51:47.218619 591 log.go:172] (0xc000840e60) (1) Data frame handling\nI0523 23:51:47.218629 591 log.go:172] (0xc000840e60) (1) Data frame sent\nI0523 23:51:47.218641 591 log.go:172] (0xc000a1b6b0) (0xc000840e60) Stream removed, broadcasting: 1\nI0523 23:51:47.218655 591 log.go:172] (0xc000a1b6b0) Go away received\nI0523 23:51:47.219035 591 log.go:172] (0xc000a1b6b0) (0xc000840e60) Stream removed, broadcasting: 1\nI0523 23:51:47.219049 591 log.go:172] (0xc000a1b6b0) (0xc00083b4a0) Stream removed, broadcasting: 3\nI0523 23:51:47.219054 591 log.go:172] (0xc000a1b6b0) (0xc0006d6c80) Stream removed, broadcasting: 5\n" May 23 23:51:47.223: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 23 23:51:47.223: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 23 23:51:47.227: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 23 23:51:57.230: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 23 23:51:57.230: INFO: Waiting for statefulset status.replicas updated to 0 May 23 23:51:57.262: INFO: POD NODE PHASE GRACE CONDITIONS May 23 23:51:57.262: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:51:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:51:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:51:47 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:51:36 +0000 UTC }] May 23 23:51:57.262: INFO: May 23 23:51:57.262: INFO: StatefulSet ss has not reached scale 3, at 1 May 23 23:51:58.268: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.976992665s May 23 23:51:59.274: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.971167378s May 23 23:52:00.279: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.965310063s May 23 23:52:01.328: INFO: Verifying statefulset ss doesn't scale past 3 for another 
5.96032829s May 23 23:52:02.332: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.910991s May 23 23:52:03.338: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.90669176s May 23 23:52:04.346: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.901472842s May 23 23:52:05.351: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.893087664s May 23 23:52:06.364: INFO: Verifying statefulset ss doesn't scale past 3 for another 887.677876ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-586 May 23 23:52:07.369: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-586 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 23 23:52:07.585: INFO: stderr: "I0523 23:52:07.512655 610 log.go:172] (0xc0006a09a0) (0xc00013ba40) Create stream\nI0523 23:52:07.512725 610 log.go:172] (0xc0006a09a0) (0xc00013ba40) Stream added, broadcasting: 1\nI0523 23:52:07.515232 610 log.go:172] (0xc0006a09a0) Reply frame received for 1\nI0523 23:52:07.515265 610 log.go:172] (0xc0006a09a0) (0xc000650c80) Create stream\nI0523 23:52:07.515272 610 log.go:172] (0xc0006a09a0) (0xc000650c80) Stream added, broadcasting: 3\nI0523 23:52:07.516095 610 log.go:172] (0xc0006a09a0) Reply frame received for 3\nI0523 23:52:07.516135 610 log.go:172] (0xc0006a09a0) (0xc000588500) Create stream\nI0523 23:52:07.516152 610 log.go:172] (0xc0006a09a0) (0xc000588500) Stream added, broadcasting: 5\nI0523 23:52:07.517291 610 log.go:172] (0xc0006a09a0) Reply frame received for 5\nI0523 23:52:07.579313 610 log.go:172] (0xc0006a09a0) Data frame received for 3\nI0523 23:52:07.579353 610 log.go:172] (0xc000650c80) (3) Data frame handling\nI0523 23:52:07.579368 610 log.go:172] (0xc000650c80) (3) Data frame sent\nI0523 23:52:07.579394 610 log.go:172] (0xc0006a09a0) Data frame received for 5\nI0523 23:52:07.579404 610 log.go:172] (0xc000588500) (5) Data frame handling\nI0523 23:52:07.579419 610 log.go:172] (0xc000588500) (5) Data frame sent\nI0523 23:52:07.579429 610 log.go:172] (0xc0006a09a0) Data frame received for 5\nI0523 23:52:07.579448 610 log.go:172] (0xc000588500) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0523 23:52:07.579541 610 log.go:172] (0xc0006a09a0) Data frame received for 3\nI0523 23:52:07.579576 610 log.go:172] (0xc000650c80) (3) Data frame handling\nI0523 23:52:07.580747 610 log.go:172] (0xc0006a09a0) Data frame received for 1\nI0523 23:52:07.580778 610 log.go:172] (0xc00013ba40) (1) Data frame handling\nI0523 23:52:07.580812 610 log.go:172] (0xc00013ba40) (1) Data frame sent\nI0523 23:52:07.580835 610 log.go:172] (0xc0006a09a0) (0xc00013ba40) Stream removed, broadcasting: 1\nI0523 23:52:07.580910 610 log.go:172] (0xc0006a09a0) Go away received\nI0523 23:52:07.581402 610 log.go:172] (0xc0006a09a0) (0xc00013ba40) Stream removed, broadcasting: 1\nI0523 23:52:07.581425 610 log.go:172] (0xc0006a09a0) (0xc000650c80) Stream removed, broadcasting: 3\nI0523 23:52:07.581436 610 log.go:172] (0xc0006a09a0) (0xc000588500) Stream removed, broadcasting: 5\n" May 23 23:52:07.585: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 23 23:52:07.585: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 23 23:52:07.585: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-586 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 23 23:52:07.831: INFO: stderr: "I0523 23:52:07.774284 631 log.go:172] (0xc0009f4fd0) (0xc000ae0500) Create stream\nI0523 23:52:07.774338 631 log.go:172] (0xc0009f4fd0) (0xc000ae0500) Stream added, broadcasting: 1\nI0523 23:52:07.775840 631 log.go:172] (0xc0009f4fd0) Reply frame received for 1\nI0523 23:52:07.775862 631 log.go:172] (0xc0009f4fd0) (0xc0009c0000) Create stream\nI0523 23:52:07.775868 631 log.go:172] (0xc0009f4fd0) (0xc0009c0000) Stream added, broadcasting: 3\nI0523 23:52:07.776555 631 log.go:172] (0xc0009f4fd0) Reply frame received for 3\nI0523 23:52:07.776587 631 log.go:172] (0xc0009f4fd0) (0xc0009c0140) Create stream\nI0523 23:52:07.776599 631 log.go:172] (0xc0009f4fd0) (0xc0009c0140) Stream added, broadcasting: 5\nI0523 23:52:07.777309 631 log.go:172] (0xc0009f4fd0) Reply frame received for 5\nI0523 23:52:07.823772 631 log.go:172] (0xc0009f4fd0) Data frame received for 3\nI0523 23:52:07.823816 631 log.go:172] (0xc0009c0000) (3) Data frame handling\nI0523 23:52:07.823832 631 log.go:172] (0xc0009c0000) (3) Data frame sent\nI0523 23:52:07.823841 631 log.go:172] (0xc0009f4fd0) Data frame received for 3\nI0523 23:52:07.823849 631 log.go:172] (0xc0009c0000) (3) Data frame handling\nI0523 23:52:07.823905 631 log.go:172] (0xc0009f4fd0) Data frame received for 5\nI0523 23:52:07.823930 631 log.go:172] (0xc0009c0140) (5) Data frame handling\nI0523 23:52:07.823949 631 log.go:172] (0xc0009c0140) (5) Data frame sent\nI0523 23:52:07.823960 631 log.go:172] (0xc0009f4fd0) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0523 23:52:07.823969 631 log.go:172] (0xc0009c0140) (5) Data frame handling\nI0523 23:52:07.825311 631 log.go:172] (0xc0009f4fd0) Data frame received for 1\nI0523 23:52:07.825336 631 log.go:172] (0xc000ae0500) (1) Data frame handling\nI0523 23:52:07.825357 631 log.go:172] (0xc000ae0500) (1) Data frame sent\nI0523 23:52:07.825494 631 log.go:172] (0xc0009f4fd0) (0xc000ae0500) Stream removed, broadcasting: 1\nI0523 23:52:07.825786 631 log.go:172] (0xc0009f4fd0) (0xc000ae0500) Stream removed, broadcasting: 1\nI0523 23:52:07.825804 631 log.go:172] (0xc0009f4fd0) (0xc0009c0000) Stream removed, broadcasting: 3\nI0523 23:52:07.825930 631 log.go:172] (0xc0009f4fd0) (0xc0009c0140) Stream removed, broadcasting: 5\n" May 23 23:52:07.831: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 23 23:52:07.831: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 23 23:52:07.831: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-586 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 23 23:52:08.042: INFO: stderr: "I0523 23:52:07.956100 651 log.go:172] (0xc0003e5340) (0xc000a2e500) Create stream\nI0523 23:52:07.956156 651 log.go:172] (0xc0003e5340) (0xc000a2e500) Stream added, broadcasting: 1\nI0523 23:52:07.960920 651 log.go:172] (0xc0003e5340) Reply frame received for 1\nI0523 23:52:07.960964 651 log.go:172] (0xc0003e5340) (0xc000542280) Create stream\nI0523 23:52:07.960980 651 log.go:172] (0xc0003e5340) (0xc000542280) Stream added, broadcasting: 3\nI0523 23:52:07.962107 
651 log.go:172] (0xc0003e5340) Reply frame received for 3\nI0523 23:52:07.962145 651 log.go:172] (0xc0003e5340) (0xc000502280) Create stream\nI0523 23:52:07.962156 651 log.go:172] (0xc0003e5340) (0xc000502280) Stream added, broadcasting: 5\nI0523 23:52:07.962858 651 log.go:172] (0xc0003e5340) Reply frame received for 5\nI0523 23:52:08.034968 651 log.go:172] (0xc0003e5340) Data frame received for 5\nI0523 23:52:08.034989 651 log.go:172] (0xc000502280) (5) Data frame handling\nI0523 23:52:08.034999 651 log.go:172] (0xc000502280) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0523 23:52:08.035378 651 log.go:172] (0xc0003e5340) Data frame received for 5\nI0523 23:52:08.035425 651 log.go:172] (0xc000502280) (5) Data frame handling\nI0523 23:52:08.035453 651 log.go:172] (0xc0003e5340) Data frame received for 3\nI0523 23:52:08.035481 651 log.go:172] (0xc000542280) (3) Data frame handling\nI0523 23:52:08.035497 651 log.go:172] (0xc000542280) (3) Data frame sent\nI0523 23:52:08.035506 651 log.go:172] (0xc0003e5340) Data frame received for 3\nI0523 23:52:08.035513 651 log.go:172] (0xc000542280) (3) Data frame handling\nI0523 23:52:08.036625 651 log.go:172] (0xc0003e5340) Data frame received for 1\nI0523 23:52:08.036651 651 log.go:172] (0xc000a2e500) (1) Data frame handling\nI0523 23:52:08.036666 651 log.go:172] (0xc000a2e500) (1) Data frame sent\nI0523 23:52:08.036684 651 log.go:172] (0xc0003e5340) (0xc000a2e500) Stream removed, broadcasting: 1\nI0523 23:52:08.036728 651 log.go:172] (0xc0003e5340) Go away received\nI0523 23:52:08.037197 651 log.go:172] (0xc0003e5340) (0xc000a2e500) Stream removed, broadcasting: 1\nI0523 23:52:08.037210 651 log.go:172] (0xc0003e5340) (0xc000542280) Stream removed, broadcasting: 3\nI0523 23:52:08.037215 651 log.go:172] (0xc0003e5340) (0xc000502280) Stream removed, broadcasting: 5\n" May 23 23:52:08.043: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 23 23:52:08.043: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 23 23:52:08.047: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 23 23:52:08.047: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 23 23:52:08.047: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 23 23:52:08.050: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-586 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 23 23:52:08.252: INFO: stderr: "I0523 23:52:08.184934 671 log.go:172] (0xc000675080) (0xc000982780) Create stream\nI0523 23:52:08.184994 671 log.go:172] (0xc000675080) (0xc000982780) Stream added, broadcasting: 1\nI0523 23:52:08.190011 671 log.go:172] (0xc000675080) Reply frame received for 1\nI0523 23:52:08.190064 671 log.go:172] (0xc000675080) (0xc000982000) Create stream\nI0523 23:52:08.190080 671 log.go:172] (0xc000675080) (0xc000982000) Stream added, broadcasting: 3\nI0523 23:52:08.191088 671 log.go:172] (0xc000675080) Reply frame received for 3\nI0523 23:52:08.191138 671 log.go:172] (0xc000675080) (0xc0009820a0) Create stream\nI0523 23:52:08.191166 671 log.go:172] (0xc000675080) (0xc0009820a0) Stream added, 
broadcasting: 5\nI0523 23:52:08.192076 671 log.go:172] (0xc000675080) Reply frame received for 5\nI0523 23:52:08.244407 671 log.go:172] (0xc000675080) Data frame received for 5\nI0523 23:52:08.244452 671 log.go:172] (0xc0009820a0) (5) Data frame handling\nI0523 23:52:08.244468 671 log.go:172] (0xc0009820a0) (5) Data frame sent\nI0523 23:52:08.244479 671 log.go:172] (0xc000675080) Data frame received for 5\nI0523 23:52:08.244488 671 log.go:172] (0xc0009820a0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0523 23:52:08.244511 671 log.go:172] (0xc000675080) Data frame received for 3\nI0523 23:52:08.244525 671 log.go:172] (0xc000982000) (3) Data frame handling\nI0523 23:52:08.244538 671 log.go:172] (0xc000982000) (3) Data frame sent\nI0523 23:52:08.244554 671 log.go:172] (0xc000675080) Data frame received for 3\nI0523 23:52:08.244584 671 log.go:172] (0xc000982000) (3) Data frame handling\nI0523 23:52:08.246613 671 log.go:172] (0xc000675080) Data frame received for 1\nI0523 23:52:08.246634 671 log.go:172] (0xc000982780) (1) Data frame handling\nI0523 23:52:08.246643 671 log.go:172] (0xc000982780) (1) Data frame sent\nI0523 23:52:08.246656 671 log.go:172] (0xc000675080) (0xc000982780) Stream removed, broadcasting: 1\nI0523 23:52:08.246729 671 log.go:172] (0xc000675080) Go away received\nI0523 23:52:08.246957 671 log.go:172] (0xc000675080) (0xc000982780) Stream removed, broadcasting: 1\nI0523 23:52:08.246975 671 log.go:172] (0xc000675080) (0xc000982000) Stream removed, broadcasting: 3\nI0523 23:52:08.246983 671 log.go:172] (0xc000675080) (0xc0009820a0) Stream removed, broadcasting: 5\n" May 23 23:52:08.252: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 23 23:52:08.252: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 23 23:52:08.252: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-586 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 23 23:52:08.528: INFO: stderr: "I0523 23:52:08.393093 693 log.go:172] (0xc0009bd600) (0xc000bbe460) Create stream\nI0523 23:52:08.393322 693 log.go:172] (0xc0009bd600) (0xc000bbe460) Stream added, broadcasting: 1\nI0523 23:52:08.398710 693 log.go:172] (0xc0009bd600) Reply frame received for 1\nI0523 23:52:08.398751 693 log.go:172] (0xc0009bd600) (0xc000750e60) Create stream\nI0523 23:52:08.398764 693 log.go:172] (0xc0009bd600) (0xc000750e60) Stream added, broadcasting: 3\nI0523 23:52:08.399666 693 log.go:172] (0xc0009bd600) Reply frame received for 3\nI0523 23:52:08.399697 693 log.go:172] (0xc0009bd600) (0xc000512500) Create stream\nI0523 23:52:08.399711 693 log.go:172] (0xc0009bd600) (0xc000512500) Stream added, broadcasting: 5\nI0523 23:52:08.400533 693 log.go:172] (0xc0009bd600) Reply frame received for 5\nI0523 23:52:08.468679 693 log.go:172] (0xc0009bd600) Data frame received for 5\nI0523 23:52:08.468703 693 log.go:172] (0xc000512500) (5) Data frame handling\nI0523 23:52:08.468718 693 log.go:172] (0xc000512500) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0523 23:52:08.518669 693 log.go:172] (0xc0009bd600) Data frame received for 3\nI0523 23:52:08.518721 693 log.go:172] (0xc000750e60) (3) Data frame handling\nI0523 23:52:08.518875 693 log.go:172] (0xc000750e60) (3) Data frame sent\nI0523 23:52:08.519248 693 log.go:172] (0xc0009bd600) Data frame received 
for 3\nI0523 23:52:08.519276 693 log.go:172] (0xc000750e60) (3) Data frame handling\nI0523 23:52:08.519312 693 log.go:172] (0xc0009bd600) Data frame received for 5\nI0523 23:52:08.519363 693 log.go:172] (0xc000512500) (5) Data frame handling\nI0523 23:52:08.521049 693 log.go:172] (0xc0009bd600) Data frame received for 1\nI0523 23:52:08.521093 693 log.go:172] (0xc000bbe460) (1) Data frame handling\nI0523 23:52:08.521271 693 log.go:172] (0xc000bbe460) (1) Data frame sent\nI0523 23:52:08.521294 693 log.go:172] (0xc0009bd600) (0xc000bbe460) Stream removed, broadcasting: 1\nI0523 23:52:08.521329 693 log.go:172] (0xc0009bd600) Go away received\nI0523 23:52:08.521903 693 log.go:172] (0xc0009bd600) (0xc000bbe460) Stream removed, broadcasting: 1\nI0523 23:52:08.521930 693 log.go:172] (0xc0009bd600) (0xc000750e60) Stream removed, broadcasting: 3\nI0523 23:52:08.521946 693 log.go:172] (0xc0009bd600) (0xc000512500) Stream removed, broadcasting: 5\n" May 23 23:52:08.528: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 23 23:52:08.528: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 23 23:52:08.528: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-586 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 23 23:52:08.784: INFO: stderr: "I0523 23:52:08.655440 713 log.go:172] (0xc00003ac60) (0xc000246280) Create stream\nI0523 23:52:08.655486 713 log.go:172] (0xc00003ac60) (0xc000246280) Stream added, broadcasting: 1\nI0523 23:52:08.658229 713 log.go:172] (0xc00003ac60) Reply frame received for 1\nI0523 23:52:08.658259 713 log.go:172] (0xc00003ac60) (0xc0003460a0) Create stream\nI0523 23:52:08.658271 713 log.go:172] (0xc00003ac60) (0xc0003460a0) Stream added, broadcasting: 3\nI0523 23:52:08.659153 713 log.go:172] (0xc00003ac60) Reply frame received for 3\nI0523 23:52:08.659179 713 log.go:172] (0xc00003ac60) (0xc0004f0500) Create stream\nI0523 23:52:08.659188 713 log.go:172] (0xc00003ac60) (0xc0004f0500) Stream added, broadcasting: 5\nI0523 23:52:08.660222 713 log.go:172] (0xc00003ac60) Reply frame received for 5\nI0523 23:52:08.718501 713 log.go:172] (0xc00003ac60) Data frame received for 5\nI0523 23:52:08.718526 713 log.go:172] (0xc0004f0500) (5) Data frame handling\nI0523 23:52:08.718542 713 log.go:172] (0xc0004f0500) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0523 23:52:08.776800 713 log.go:172] (0xc00003ac60) Data frame received for 3\nI0523 23:52:08.776844 713 log.go:172] (0xc0003460a0) (3) Data frame handling\nI0523 23:52:08.776867 713 log.go:172] (0xc0003460a0) (3) Data frame sent\nI0523 23:52:08.776895 713 log.go:172] (0xc00003ac60) Data frame received for 3\nI0523 23:52:08.776927 713 log.go:172] (0xc0003460a0) (3) Data frame handling\nI0523 23:52:08.776970 713 log.go:172] (0xc00003ac60) Data frame received for 5\nI0523 23:52:08.777001 713 log.go:172] (0xc0004f0500) (5) Data frame handling\nI0523 23:52:08.778784 713 log.go:172] (0xc00003ac60) Data frame received for 1\nI0523 23:52:08.778803 713 log.go:172] (0xc000246280) (1) Data frame handling\nI0523 23:52:08.778816 713 log.go:172] (0xc000246280) (1) Data frame sent\nI0523 23:52:08.778835 713 log.go:172] (0xc00003ac60) (0xc000246280) Stream removed, broadcasting: 1\nI0523 23:52:08.778853 713 log.go:172] (0xc00003ac60) Go away received\nI0523 23:52:08.779202 713 log.go:172] 
(0xc00003ac60) (0xc000246280) Stream removed, broadcasting: 1\nI0523 23:52:08.779219 713 log.go:172] (0xc00003ac60) (0xc0003460a0) Stream removed, broadcasting: 3\nI0523 23:52:08.779227 713 log.go:172] (0xc00003ac60) (0xc0004f0500) Stream removed, broadcasting: 5\n" May 23 23:52:08.784: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 23 23:52:08.784: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 23 23:52:08.784: INFO: Waiting for statefulset status.replicas updated to 0 May 23 23:52:08.787: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 23 23:52:18.798: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 23 23:52:18.798: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 23 23:52:18.798: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 23 23:52:18.824: INFO: POD NODE PHASE GRACE CONDITIONS May 23 23:52:18.824: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:51:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:52:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:52:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:51:36 +0000 UTC }] May 23 23:52:18.825: INFO: ss-1 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:51:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:52:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:52:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:51:57 +0000 UTC }] May 23 23:52:18.825: INFO: ss-2 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:51:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:52:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:52:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:51:57 +0000 UTC }] May 23 23:52:18.825: INFO: May 23 23:52:18.825: INFO: StatefulSet ss has not reached scale 0, at 3 May 23 23:52:20.001: INFO: POD NODE PHASE GRACE CONDITIONS May 23 23:52:20.001: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:51:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:52:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:52:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:51:36 +0000 UTC }] May 23 23:52:20.001: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:51:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:52:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} 
{ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:52:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:51:57 +0000 UTC }] May 23 23:52:20.001: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:51:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:52:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:52:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:51:57 +0000 UTC }] May 23 23:52:20.001: INFO: May 23 23:52:20.001: INFO: StatefulSet ss has not reached scale 0, at 3 May 23 23:52:21.005: INFO: POD NODE PHASE GRACE CONDITIONS May 23 23:52:21.005: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:51:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:52:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:52:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:51:36 +0000 UTC }] May 23 23:52:21.006: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:51:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:52:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:52:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:51:57 +0000 UTC }] May 23 23:52:21.006: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:51:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:52:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:52:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:51:57 +0000 UTC }] May 23 23:52:21.006: INFO: May 23 23:52:21.006: INFO: StatefulSet ss has not reached scale 0, at 3 May 23 23:52:22.010: INFO: POD NODE PHASE GRACE CONDITIONS May 23 23:52:22.010: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:51:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:52:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:52:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:51:36 +0000 UTC }] May 23 23:52:22.011: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:51:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:52:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:52:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2020-05-23 23:51:57 +0000 UTC }] May 23 23:52:22.011: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:51:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:52:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:52:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:51:57 +0000 UTC }] May 23 23:52:22.011: INFO: May 23 23:52:22.011: INFO: StatefulSet ss has not reached scale 0, at 3 May 23 23:52:23.015: INFO: POD NODE PHASE GRACE CONDITIONS May 23 23:52:23.015: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:51:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:52:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:52:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:51:36 +0000 UTC }] May 23 23:52:23.015: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:51:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:52:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:52:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:51:57 +0000 UTC }] May 23 23:52:23.015: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:51:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:52:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:52:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:51:57 +0000 UTC }] May 23 23:52:23.015: INFO: May 23 23:52:23.015: INFO: StatefulSet ss has not reached scale 0, at 3 May 23 23:52:24.021: INFO: POD NODE PHASE GRACE CONDITIONS May 23 23:52:24.021: INFO: ss-0 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:51:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:52:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:52:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:51:36 +0000 UTC }] May 23 23:52:24.021: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:51:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:52:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:52:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:51:57 +0000 UTC }] May 23 23:52:24.021: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:51:57 +0000 UTC } {Ready False 0001-01-01 
00:00:00 +0000 UTC 2020-05-23 23:52:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:52:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:51:57 +0000 UTC }] May 23 23:52:24.021: INFO: May 23 23:52:24.021: INFO: StatefulSet ss has not reached scale 0, at 3 May 23 23:52:25.024: INFO: POD NODE PHASE GRACE CONDITIONS May 23 23:52:25.024: INFO: ss-1 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:51:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:52:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:52:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:51:57 +0000 UTC }] May 23 23:52:25.025: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:51:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:52:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:52:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-23 23:51:57 +0000 UTC }] May 23 23:52:25.025: INFO: May 23 23:52:25.025: INFO: StatefulSet ss has not reached scale 0, at 2 May 23 23:52:26.029: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.778841513s May 23 23:52:27.033: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.774309443s May 23 23:52:28.037: INFO: Verifying statefulset ss doesn't scale past 0 for another 770.516284ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-586 May 23 23:52:29.041: INFO: Scaling statefulset ss to 0 May 23 23:52:29.050: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 23 23:52:29.052: INFO: Deleting all statefulset in ns statefulset-586 May 23 23:52:29.055: INFO: Scaling statefulset ss to 0 May 23 23:52:29.062: INFO: Waiting for statefulset status.replicas updated to 0 May 23 23:52:29.064: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:52:29.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-586" for this suite.
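The scale-down above goes through the StatefulSet scale subresource. A minimal client-go sketch of the same operation, assuming an existing clientset and the namespace/name from this run (the helper name scaleStatefulSet is ours, not the framework's):

```go
package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// scaleStatefulSet sets spec.replicas via the scale subresource, which is
// what "Scaling statefulset ss to 0" corresponds to in the log above.
func scaleStatefulSet(ctx context.Context, cs kubernetes.Interface, ns, name string, replicas int32) error {
	scale, err := cs.AppsV1().StatefulSets(ns).GetScale(ctx, name, metav1.GetOptions{})
	if err != nil {
		return fmt.Errorf("get scale: %w", err)
	}
	scale.Spec.Replicas = replicas
	_, err = cs.AppsV1().StatefulSets(ns).UpdateScale(ctx, name, scale, metav1.UpdateOptions{})
	return err
}
```

Note what makes this the burst-scaling case: with podManagementPolicy Parallel the controller removes replicas without waiting for ordered, one-at-a-time termination, which is why the log counts 3, then 2, then 0 while every pod is still Ready=false.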
• [SLOW TEST:52.261 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":288,"completed":77,"skipped":1387,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:52:29.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 23 23:52:29.178: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 23 23:52:29.212: INFO: Number of nodes with available pods: 0 May 23 23:52:29.212: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
May 23 23:52:29.322: INFO: Number of nodes with available pods: 0 May 23 23:52:29.322: INFO: Node latest-worker is running more than one daemon pod May 23 23:52:30.455: INFO: Number of nodes with available pods: 0 May 23 23:52:30.455: INFO: Node latest-worker is running more than one daemon pod May 23 23:52:31.326: INFO: Number of nodes with available pods: 0 May 23 23:52:31.326: INFO: Node latest-worker is running more than one daemon pod May 23 23:52:32.326: INFO: Number of nodes with available pods: 0 May 23 23:52:32.326: INFO: Node latest-worker is running more than one daemon pod May 23 23:52:33.326: INFO: Number of nodes with available pods: 1 May 23 23:52:33.326: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 23 23:52:33.366: INFO: Number of nodes with available pods: 1 May 23 23:52:33.366: INFO: Number of running nodes: 0, number of available pods: 1 May 23 23:52:34.376: INFO: Number of nodes with available pods: 0 May 23 23:52:34.376: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 23 23:52:34.447: INFO: Number of nodes with available pods: 0 May 23 23:52:34.447: INFO: Node latest-worker is running more than one daemon pod May 23 23:52:35.451: INFO: Number of nodes with available pods: 0 May 23 23:52:35.451: INFO: Node latest-worker is running more than one daemon pod May 23 23:52:36.451: INFO: Number of nodes with available pods: 0 May 23 23:52:36.452: INFO: Node latest-worker is running more than one daemon pod May 23 23:52:37.452: INFO: Number of nodes with available pods: 0 May 23 23:52:37.452: INFO: Node latest-worker is running more than one daemon pod May 23 23:52:38.452: INFO: Number of nodes with available pods: 0 May 23 23:52:38.452: INFO: Node latest-worker is running more than one daemon pod May 23 23:52:39.452: INFO: Number of nodes with available pods: 0 May 23 23:52:39.452: INFO: Node latest-worker is running more than one daemon pod May 23 23:52:40.452: INFO: Number of nodes with available pods: 0 May 23 23:52:40.452: INFO: Node latest-worker is running more than one daemon pod May 23 23:52:41.452: INFO: Number of nodes with available pods: 0 May 23 23:52:41.452: INFO: Node latest-worker is running more than one daemon pod May 23 23:52:42.451: INFO: Number of nodes with available pods: 0 May 23 23:52:42.451: INFO: Node latest-worker is running more than one daemon pod May 23 23:52:43.452: INFO: Number of nodes with available pods: 0 May 23 23:52:43.452: INFO: Node latest-worker is running more than one daemon pod May 23 23:52:44.451: INFO: Number of nodes with available pods: 0 May 23 23:52:44.452: INFO: Node latest-worker is running more than one daemon pod May 23 23:52:45.452: INFO: Number of nodes with available pods: 0 May 23 23:52:45.452: INFO: Node latest-worker is running more than one daemon pod May 23 23:52:46.452: INFO: Number of nodes with available pods: 0 May 23 23:52:46.452: INFO: Node latest-worker is running more than one daemon pod May 23 23:52:47.490: INFO: Number of nodes with available pods: 0 May 23 23:52:47.490: INFO: Node latest-worker is running more than one daemon pod May 23 23:52:48.452: INFO: Number of nodes with available pods: 1 May 23 23:52:48.452: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6234, will wait for the garbage collector to delete the pods May 23 23:52:48.517: INFO: Deleting DaemonSet.extensions daemon-set took: 6.541866ms May 23 23:52:48.818: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.298517ms May 23 23:52:54.921: INFO: Number of nodes with available pods: 0 May 23 23:52:54.921: INFO: Number of running nodes: 0, number of available pods: 0 May 23 23:52:54.924: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6234/daemonsets","resourceVersion":"7146324"},"items":null} May 23 23:52:54.926: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6234/pods","resourceVersion":"7146324"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:52:54.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6234" for this suite. • [SLOW TEST:25.881 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":288,"completed":78,"skipped":1400,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:52:54.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 23 23:52:59.603: INFO: Successfully updated pod "pod-update-04aa7714-6536-425a-9d11-5b6efb592965" STEP: verifying the updated pod is in kubernetes May 23 23:52:59.618: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:52:59.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6481" for this suite. 
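The update step above ("updating the pod ... Pod update OK") is a plain read-modify-write against the API server. A sketch under the same assumptions (an existing clientset; the label key is illustrative), using client-go's conflict-retry helper since pod objects are versioned optimistically:

```go
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// updatePodLabel re-reads the pod and retries on resourceVersion conflicts,
// the usual pattern for the GET/mutate/UPDATE cycle the test performs.
func updatePodLabel(ctx context.Context, cs kubernetes.Interface, ns, name, key, value string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if pod.Labels == nil {
			pod.Labels = map[string]string{}
		}
		pod.Labels[key] = value
		_, err = cs.CoreV1().Pods(ns).Update(ctx, pod, metav1.UpdateOptions{})
		return err
	})
}
```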
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":288,"completed":79,"skipped":1419,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:52:59.626: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2268.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-2268.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2268.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2268.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-2268.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2268.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 23 23:53:05.780: INFO: DNS probes using dns-2268/dns-test-b925c7c7-f9df-4f19-bee1-76b1ccb5b334 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:53:05.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2268" for this suite. 
• [SLOW TEST:6.306 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":288,"completed":80,"skipped":1435,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:53:05.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs May 23 23:53:06.367: INFO: Waiting up to 5m0s for pod "pod-56631d08-9b3b-4d67-ab76-585e203c74e9" in namespace "emptydir-6257" to be "Succeeded or Failed" May 23 23:53:06.382: INFO: Pod "pod-56631d08-9b3b-4d67-ab76-585e203c74e9": Phase="Pending", Reason="", readiness=false. Elapsed: 15.830565ms May 23 23:53:08.447: INFO: Pod "pod-56631d08-9b3b-4d67-ab76-585e203c74e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080566185s May 23 23:53:10.452: INFO: Pod "pod-56631d08-9b3b-4d67-ab76-585e203c74e9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084899142s May 23 23:53:12.455: INFO: Pod "pod-56631d08-9b3b-4d67-ab76-585e203c74e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.088470318s STEP: Saw pod success May 23 23:53:12.455: INFO: Pod "pod-56631d08-9b3b-4d67-ab76-585e203c74e9" satisfied condition "Succeeded or Failed" May 23 23:53:12.457: INFO: Trying to get logs from node latest-worker2 pod pod-56631d08-9b3b-4d67-ab76-585e203c74e9 container test-container: STEP: delete the pod May 23 23:53:12.496: INFO: Waiting for pod pod-56631d08-9b3b-4d67-ab76-585e203c74e9 to disappear May 23 23:53:12.526: INFO: Pod pod-56631d08-9b3b-4d67-ab76-585e203c74e9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:53:12.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6257" for this suite. 
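The (non-root,0777,tmpfs) case boils down to a memory-backed emptyDir mounted into a container running as a non-root UID. A sketch of a pod with that shape (the real test uses the e2e mount-test image to create and verify the 0777 file; the busybox command here is a simplified stand-in):

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

var tmpfsPod = corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
	Spec: corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Volumes: []corev1.Volume{{
			Name: "test-volume",
			VolumeSource: corev1.VolumeSource{
				// Medium "Memory" backs the volume with tmpfs instead of node disk.
				EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
			},
		}},
		Containers: []corev1.Container{{
			Name:            "test-container",
			Image:           "busybox",
			Command:         []string{"sh", "-c", "id && ls -ld /test-volume"},
			SecurityContext: &corev1.SecurityContext{RunAsUser: int64Ptr(1001)}, // non-root
			VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
		}},
	},
}
```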
• [SLOW TEST:6.606 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":81,"skipped":1496,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:53:12.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 23 23:53:12.661: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 23 23:53:12.683: INFO: Pod name sample-pod: Found 0 pods out of 1 May 23 23:53:17.686: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 23 23:53:17.687: INFO: Creating deployment "test-rolling-update-deployment" May 23 23:53:17.698: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 23 23:53:17.734: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 23 23:53:19.741: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 23 23:53:19.744: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725874797, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725874797, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725874797, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725874797, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-df7bb669b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 23 23:53:21.748: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 23 23:53:21.759: INFO: Deployment "test-rolling-update-deployment": 
&Deployment{ObjectMeta:{test-rolling-update-deployment deployment-4088 /apis/apps/v1/namespaces/deployment-4088/deployments/test-rolling-update-deployment c7fe7380-1937-4299-bc27-4faba5ac47fd 7146566 1 2020-05-23 23:53:17 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-05-23 23:53:17 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-23 23:53:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002e3b198 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-23 23:53:17 +0000 UTC,LastTransitionTime:2020-05-23 23:53:17 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-df7bb669b" has successfully progressed.,LastUpdateTime:2020-05-23 23:53:21 +0000 UTC,LastTransitionTime:2020-05-23 23:53:17 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 23 23:53:21.762: INFO: New ReplicaSet 
"test-rolling-update-deployment-df7bb669b" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-df7bb669b deployment-4088 /apis/apps/v1/namespaces/deployment-4088/replicasets/test-rolling-update-deployment-df7bb669b a11ff099-d321-4aad-8564-54a5251f12cd 7146555 1 2020-05-23 23:53:17 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment c7fe7380-1937-4299-bc27-4faba5ac47fd 0xc002432a60 0xc002432a61}] [] [{kube-controller-manager Update apps/v1 2020-05-23 23:53:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7fe7380-1937-4299-bc27-4faba5ac47fd\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: df7bb669b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002432af8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 23 23:53:21.762: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 23 23:53:21.762: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-4088 /apis/apps/v1/namespaces/deployment-4088/replicasets/test-rolling-update-controller 4f0e5db7-e417-487a-9a20-21726b0db316 7146565 2 2020-05-23 23:53:12 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 
c7fe7380-1937-4299-bc27-4faba5ac47fd 0xc002432957 0xc002432958}] [] [{e2e.test Update apps/v1 2020-05-23 23:53:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-23 23:53:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7fe7380-1937-4299-bc27-4faba5ac47fd\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0024329f8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 23 23:53:21.766: INFO: Pod "test-rolling-update-deployment-df7bb669b-sqkhq" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-df7bb669b-sqkhq test-rolling-update-deployment-df7bb669b- deployment-4088 /api/v1/namespaces/deployment-4088/pods/test-rolling-update-deployment-df7bb669b-sqkhq 0c7c3ddc-3855-4fe0-9e06-ed75ad94953d 7146554 0 2020-05-23 23:53:17 +0000 UTC map[name:sample-pod pod-template-hash:df7bb669b] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-df7bb669b a11ff099-d321-4aad-8564-54a5251f12cd 0xc0025e8580 0xc0025e8581}] [] [{kube-controller-manager Update v1 2020-05-23 23:53:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a11ff099-d321-4aad-8564-54a5251f12cd\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-23 23:53:21 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.103\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ktwlq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ktwlq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ktwlq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-23 23:53:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-05-23 23:53:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-23 23:53:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-23 23:53:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.103,StartTime:2020-05-23 23:53:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-23 23:53:20 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://fad7c8e855fa9a15d344852880f3d62d2de1535cbe4d299b639d30d6408b2b41,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.103,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:53:21.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4088" for this suite. • [SLOW TEST:9.236 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":288,"completed":82,"skipped":1509,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:53:21.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation May 23 23:53:21.916: INFO: >>> kubeConfig: /root/.kube/config May 23 23:53:24.879: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:53:35.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4435" for this suite. 
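Two readability notes on the rolling-update run above. First, the "MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)" fragments in the Deployment dump are Go printf artifacts (an unescaped % in the struct's string form); the values are simply 25% each. Second, those are the strategy defaults, written out explicitly below; during a rollout the controller keeps ready replicas at or above replicas minus maxUnavailable while staying at or below replicas plus maxSurge, which is why the status briefly shows Replicas:2, UnavailableReplicas:1 for a one-replica deployment:

```go
package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// The 25%/25% defaults from the dump, spelled out.
func rollingUpdateStrategy() appsv1.DeploymentStrategy {
	maxUnavailable := intstr.FromString("25%")
	maxSurge := intstr.FromString("25%")
	return appsv1.DeploymentStrategy{
		Type: appsv1.RollingUpdateDeploymentStrategyType,
		RollingUpdate: &appsv1.RollingUpdateDeployment{
			MaxUnavailable: &maxUnavailable, // floor: this many replicas must stay ready
			MaxSurge:       &maxSurge,       // ceiling: at most replicas + 25% pods total
		},
	}
}
```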
• [SLOW TEST:13.847 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":288,"completed":83,"skipped":1519,"failed":0} SSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:53:35.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-813a15e2-97ba-42d6-be88-c133b7f3d514 in namespace container-probe-9913 May 23 23:53:39.738: INFO: Started pod liveness-813a15e2-97ba-42d6-be88-c133b7f3d514 in namespace container-probe-9913 STEP: checking the pod's current state and verifying that restartCount is present May 23 23:53:39.742: INFO: Initial restart count of pod liveness-813a15e2-97ba-42d6-be88-c133b7f3d514 is 0 May 23 23:53:55.783: INFO: Restart count of pod container-probe-9913/liveness-813a15e2-97ba-42d6-be88-c133b7f3d514 is now 1 (16.041286747s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:53:55.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9913" for this suite. 
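The restart observed above (restartCount 0 to 1 after about 16s) is the kubelet acting on an HTTP liveness probe. A sketch of a container wired that way, against the v1.19-era API where Probe embeds Handler; the agnhost "liveness" argument is our assumption about how the failing /healthz endpoint is served, not something this log states:

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// The kubelet GETs /healthz on a schedule and restarts the container once
// failureThreshold consecutive probes fail; that restart is what bumps
// restartCount in the pod status.
var livenessContainer = corev1.Container{
	Name:  "liveness",
	Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13",
	Args:  []string{"liveness"}, // assumed: serves /healthz, then starts failing it
	LivenessProbe: &corev1.Probe{
		Handler: corev1.Handler{
			HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
		},
		InitialDelaySeconds: 15,
		FailureThreshold:    1,
	},
}
```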
• [SLOW TEST:20.212 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":288,"completed":84,"skipped":1523,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:53:55.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service multi-endpoint-test in namespace services-7375 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7375 to expose endpoints map[] May 23 23:53:56.150: INFO: Get endpoints failed (7.258249ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 23 23:53:57.153: INFO: successfully validated that service multi-endpoint-test in namespace services-7375 exposes endpoints map[] (1.01077215s elapsed) STEP: Creating pod pod1 in namespace services-7375 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7375 to expose endpoints map[pod1:[100]] May 23 23:54:00.246: INFO: successfully validated that service multi-endpoint-test in namespace services-7375 exposes endpoints map[pod1:[100]] (3.084988373s elapsed) STEP: Creating pod pod2 in namespace services-7375 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7375 to expose endpoints map[pod1:[100] pod2:[101]] May 23 23:54:04.359: INFO: successfully validated that service multi-endpoint-test in namespace services-7375 exposes endpoints map[pod1:[100] pod2:[101]] (4.108077262s elapsed) STEP: Deleting pod pod1 in namespace services-7375 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7375 to expose endpoints map[pod2:[101]] May 23 23:54:05.425: INFO: successfully validated that service multi-endpoint-test in namespace services-7375 exposes endpoints map[pod2:[101]] (1.060875815s elapsed) STEP: Deleting pod pod2 in namespace services-7375 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7375 to expose endpoints map[] May 23 23:54:06.460: INFO: successfully validated that service multi-endpoint-test in namespace services-7375 exposes endpoints map[] (1.030983485s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:54:06.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7375" for this suite. 
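A Service can expose several named ports, each with its own targetPort; the endpoints controller then publishes a port set per named port, which is what the maps map[pod1:[100]] and map[pod1:[100] pod2:[101]] above are tracking. A sketch with the service name from this run (the selector label and front-end port numbers are assumptions):

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// Two named ports forwarding to the container ports 100 and 101 that
// pod1 and pod2 serve in the log above.
var multiport = corev1.Service{
	ObjectMeta: metav1.ObjectMeta{Name: "multi-endpoint-test", Namespace: "services-7375"},
	Spec: corev1.ServiceSpec{
		Selector: map[string]string{"app": "multiport"}, // assumed label
		Ports: []corev1.ServicePort{
			{Name: "portname1", Port: 80, TargetPort: intstr.FromInt(100)},
			{Name: "portname2", Port: 81, TargetPort: intstr.FromInt(101)},
		},
	},
}
```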
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:10.691 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":288,"completed":85,"skipped":1530,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:54:06.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-1db4b376-14b3-46b8-9199-4958d96e47e5 STEP: Creating a pod to test consume configMaps May 23 23:54:06.593: INFO: Waiting up to 5m0s for pod "pod-configmaps-1b30be65-bbd0-4761-8d22-b14cb0cbfc3d" in namespace "configmap-3626" to be "Succeeded or Failed" May 23 23:54:06.603: INFO: Pod "pod-configmaps-1b30be65-bbd0-4761-8d22-b14cb0cbfc3d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.982926ms May 23 23:54:08.608: INFO: Pod "pod-configmaps-1b30be65-bbd0-4761-8d22-b14cb0cbfc3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014945329s May 23 23:54:10.612: INFO: Pod "pod-configmaps-1b30be65-bbd0-4761-8d22-b14cb0cbfc3d": Phase="Running", Reason="", readiness=true. Elapsed: 4.019058505s May 23 23:54:12.617: INFO: Pod "pod-configmaps-1b30be65-bbd0-4761-8d22-b14cb0cbfc3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.024324053s STEP: Saw pod success May 23 23:54:12.617: INFO: Pod "pod-configmaps-1b30be65-bbd0-4761-8d22-b14cb0cbfc3d" satisfied condition "Succeeded or Failed" May 23 23:54:12.620: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-1b30be65-bbd0-4761-8d22-b14cb0cbfc3d container configmap-volume-test: STEP: delete the pod May 23 23:54:12.651: INFO: Waiting for pod pod-configmaps-1b30be65-bbd0-4761-8d22-b14cb0cbfc3d to disappear May 23 23:54:12.665: INFO: Pod pod-configmaps-1b30be65-bbd0-4761-8d22-b14cb0cbfc3d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:54:12.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3626" for this suite. 
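"Mappings and Item mode" means the volume does not mirror the whole ConfigMap: an items list picks keys, renames their paths, and can give each file its own mode, overriding the volume-wide defaultMode. A sketch (the key, path, and mode are illustrative):

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

func int32Ptr(i int32) *int32 { return &i }

// Only key "data-2" is projected, at a custom path and with mode 0400;
// unlisted keys do not appear in the volume at all.
var cmVolume = corev1.Volume{
	Name: "configmap-volume",
	VolumeSource: corev1.VolumeSource{
		ConfigMap: &corev1.ConfigMapVolumeSource{
			LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
			Items: []corev1.KeyToPath{{
				Key:  "data-2",
				Path: "path/to/data-2",
				Mode: int32Ptr(0400), // per-item mode beats defaultMode for this file
			}},
		},
	},
}
```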
• [SLOW TEST:6.148 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":86,"skipped":1533,"failed":0} SSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:54:12.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-bed0ca90-9d76-446d-bdf8-2d566059e5f8 STEP: Creating a pod to test consume secrets May 23 23:54:12.763: INFO: Waiting up to 5m0s for pod "pod-secrets-31fd0154-0bc8-477d-b3db-1f1814c4ac0e" in namespace "secrets-6739" to be "Succeeded or Failed" May 23 23:54:12.778: INFO: Pod "pod-secrets-31fd0154-0bc8-477d-b3db-1f1814c4ac0e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.478573ms May 23 23:54:14.844: INFO: Pod "pod-secrets-31fd0154-0bc8-477d-b3db-1f1814c4ac0e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080203273s May 23 23:54:16.847: INFO: Pod "pod-secrets-31fd0154-0bc8-477d-b3db-1f1814c4ac0e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.08381267s STEP: Saw pod success May 23 23:54:16.847: INFO: Pod "pod-secrets-31fd0154-0bc8-477d-b3db-1f1814c4ac0e" satisfied condition "Succeeded or Failed" May 23 23:54:16.850: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-31fd0154-0bc8-477d-b3db-1f1814c4ac0e container secret-volume-test: STEP: delete the pod May 23 23:54:16.880: INFO: Waiting for pod pod-secrets-31fd0154-0bc8-477d-b3db-1f1814c4ac0e to disappear May 23 23:54:16.893: INFO: Pod pod-secrets-31fd0154-0bc8-477d-b3db-1f1814c4ac0e no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:54:16.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6739" for this suite. 
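"Multiple volumes" here means the same Secret mounted twice in one pod; both mounts independently project the same data. A sketch (the secret name, mount paths, and probe command are illustrative):

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
)

// One secret, two volumes, two mount points in the same container.
var multiVolPodSpec = corev1.PodSpec{
	RestartPolicy: corev1.RestartPolicyNever,
	Volumes: []corev1.Volume{
		{Name: "secret-volume-1", VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "secret-test"}}},
		{Name: "secret-volume-2", VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "secret-test"}}},
	},
	Containers: []corev1.Container{{
		Name:    "secret-volume-test",
		Image:   "busybox",
		Command: []string{"sh", "-c", "cat /etc/secret-volume-1/* /etc/secret-volume-2/*"},
		VolumeMounts: []corev1.VolumeMount{
			{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
			{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
		},
	}},
}
```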
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":288,"completed":87,"skipped":1538,"failed":0} ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:54:16.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-b7e3bf09-19e0-47eb-9646-ab9eece567b2 STEP: Creating a pod to test consume secrets May 23 23:54:17.361: INFO: Waiting up to 5m0s for pod "pod-secrets-d34cfaa4-9cdd-4ef4-92c1-c8a637c3b8d1" in namespace "secrets-5546" to be "Succeeded or Failed" May 23 23:54:17.397: INFO: Pod "pod-secrets-d34cfaa4-9cdd-4ef4-92c1-c8a637c3b8d1": Phase="Pending", Reason="", readiness=false. Elapsed: 35.708468ms May 23 23:54:19.424: INFO: Pod "pod-secrets-d34cfaa4-9cdd-4ef4-92c1-c8a637c3b8d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063194348s May 23 23:54:21.428: INFO: Pod "pod-secrets-d34cfaa4-9cdd-4ef4-92c1-c8a637c3b8d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06725867s STEP: Saw pod success May 23 23:54:21.428: INFO: Pod "pod-secrets-d34cfaa4-9cdd-4ef4-92c1-c8a637c3b8d1" satisfied condition "Succeeded or Failed" May 23 23:54:21.431: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-d34cfaa4-9cdd-4ef4-92c1-c8a637c3b8d1 container secret-volume-test: STEP: delete the pod May 23 23:54:21.510: INFO: Waiting for pod pod-secrets-d34cfaa4-9cdd-4ef4-92c1-c8a637c3b8d1 to disappear May 23 23:54:21.574: INFO: Pod pod-secrets-d34cfaa4-9cdd-4ef4-92c1-c8a637c3b8d1 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:54:21.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5546" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":88,"skipped":1538,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:54:21.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium May 23 23:54:21.702: INFO: Waiting up to 5m0s for pod "pod-7b6ea9b3-f9e8-4824-b8db-f797be6b6e01" in namespace "emptydir-6427" to be "Succeeded or Failed" May 23 23:54:21.706: INFO: Pod "pod-7b6ea9b3-f9e8-4824-b8db-f797be6b6e01": Phase="Pending", Reason="", readiness=false. Elapsed: 3.930063ms May 23 23:54:23.718: INFO: Pod "pod-7b6ea9b3-f9e8-4824-b8db-f797be6b6e01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016248624s May 23 23:54:25.722: INFO: Pod "pod-7b6ea9b3-f9e8-4824-b8db-f797be6b6e01": Phase="Running", Reason="", readiness=true. Elapsed: 4.02015508s May 23 23:54:27.726: INFO: Pod "pod-7b6ea9b3-f9e8-4824-b8db-f797be6b6e01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.023997036s STEP: Saw pod success May 23 23:54:27.726: INFO: Pod "pod-7b6ea9b3-f9e8-4824-b8db-f797be6b6e01" satisfied condition "Succeeded or Failed" May 23 23:54:27.729: INFO: Trying to get logs from node latest-worker2 pod pod-7b6ea9b3-f9e8-4824-b8db-f797be6b6e01 container test-container: STEP: delete the pod May 23 23:54:27.776: INFO: Waiting for pod pod-7b6ea9b3-f9e8-4824-b8db-f797be6b6e01 to disappear May 23 23:54:27.798: INFO: Pod pod-7b6ea9b3-f9e8-4824-b8db-f797be6b6e01 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:54:27.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6427" for this suite. 
• [SLOW TEST:6.205 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":89,"skipped":1550,"failed":0} [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:54:27.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:54:27.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-183" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":288,"completed":90,"skipped":1550,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:54:28.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 23 23:54:28.057: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config version' May 23 23:54:28.233: INFO: stderr: "" May 23 23:54:28.233: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.3.35+3416442e4b7eeb\", GitCommit:\"3416442e4b7eebfce360f5b7468c6818d3e882f8\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T19:24:24Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.2\", GitCommit:\"52c56ce7a8272c798dbc29846288d7cd9fbae032\", GitTreeState:\"clean\", 
BuildDate:\"2020-04-28T05:35:31Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:54:28.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4750" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":288,"completed":91,"skipped":1590,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:54:28.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 23 23:54:28.309: INFO: PodSpec: initContainers in spec.initContainers May 23 23:55:16.528: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-e1193941-7078-4f54-9eae-34d4c8620ac4", GenerateName:"", Namespace:"init-container-1651", SelfLink:"/api/v1/namespaces/init-container-1651/pods/pod-init-e1193941-7078-4f54-9eae-34d4c8620ac4", UID:"baef308b-8dea-4112-90ee-febc745fafd4", ResourceVersion:"7147193", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63725874868, loc:(*time.Location)(0x7c342a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"309720201"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002cc15e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002cc1620)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002cc1660), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002cc16a0)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-wrkg4", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002787880), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), 
CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-wrkg4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-wrkg4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-wrkg4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", 
TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002b0bdb8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00129ed20), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002b0be40)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002b0be60)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002b0be68), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002b0be6c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725874868, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725874868, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725874868, loc:(*time.Location)(0x7c342a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725874868, loc:(*time.Location)(0x7c342a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.13", PodIP:"10.244.1.106", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.106"}}, StartTime:(*v1.Time)(0xc002cc16e0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00129eee0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00129ef50)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", 
ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://53bd7bec19796b64b8d327d5d7c270a58282c4928c84b79564d871eacf25c932", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002cc1720), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002cc1700), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc002b0bf6f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:55:16.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1651" for this suite. • [SLOW TEST:48.345 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":288,"completed":92,"skipped":1641,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:55:16.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting the proxy server May 23 23:55:16.660: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:55:16.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "kubectl-4504" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":288,"completed":93,"skipped":1645,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:55:16.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1311 STEP: creating the pod May 23 23:55:16.802: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8316' May 23 23:55:18.215: INFO: stderr: "" May 23 23:55:18.215: INFO: stdout: "pod/pause created\n" May 23 23:55:18.215: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 23 23:55:18.215: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-8316" to be "running and ready" May 23 23:55:18.229: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 13.954443ms May 23 23:55:20.233: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018080663s May 23 23:55:22.238: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.02267705s May 23 23:55:22.238: INFO: Pod "pause" satisfied condition "running and ready" May 23 23:55:22.238: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod May 23 23:55:22.238: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-8316' May 23 23:55:22.358: INFO: stderr: "" May 23 23:55:22.358: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 23 23:55:22.358: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8316' May 23 23:55:22.462: INFO: stderr: "" May 23 23:55:22.462: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod May 23 23:55:22.463: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-8316' May 23 23:55:22.566: INFO: stderr: "" May 23 23:55:22.566: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 23 23:55:22.566: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-8316' May 23 23:55:22.667: INFO: stderr: "" May 23 23:55:22.667: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1318 STEP: using delete to clean up resources May 23 23:55:22.667: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8316' May 23 23:55:22.812: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 23 23:55:22.812: INFO: stdout: "pod \"pause\" force deleted\n" May 23 23:55:22.812: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-8316' May 23 23:55:22.931: INFO: stderr: "No resources found in kubectl-8316 namespace.\n" May 23 23:55:22.932: INFO: stdout: "" May 23 23:55:22.932: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-8316 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 23 23:55:23.031: INFO: stderr: "" May 23 23:55:23.031: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:55:23.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8316" for this suite. 
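------------------------------
The label steps above go through the kubectl CLI; one API-level equivalent is a strategic-merge patch against metadata.labels, where patching a key to null removes it (the `testing-label-` form). A minimal sketch assuming a kubeconfig at the default location; the namespace and pod name are copied from the log but otherwise illustrative:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// Add the label: a strategic merge patch that sets metadata.labels.
	add := []byte(`{"metadata":{"labels":{"testing-label":"testing-label-value"}}}`)
	if _, err := cs.CoreV1().Pods("kubectl-8316").Patch(ctx, "pause",
		types.StrategicMergePatchType, add, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// Remove it: patching a label key to null deletes that key.
	del := []byte(`{"metadata":{"labels":{"testing-label":null}}}`)
	if _, err := cs.CoreV1().Pods("kubectl-8316").Patch(ctx, "pause",
		types.StrategicMergePatchType, del, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("label added and removed")
}
```
------------------------------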
• [SLOW TEST:6.319 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1308 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":288,"completed":94,"skipped":1647,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:55:23.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 23 23:55:27.807: INFO: Successfully updated pod "pod-update-activedeadlineseconds-d0dccfb5-479b-4e63-ac85-bb739aae454f" May 23 23:55:27.807: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-d0dccfb5-479b-4e63-ac85-bb739aae454f" in namespace "pods-4812" to be "terminated due to deadline exceeded" May 23 23:55:27.824: INFO: Pod "pod-update-activedeadlineseconds-d0dccfb5-479b-4e63-ac85-bb739aae454f": Phase="Running", Reason="", readiness=true. Elapsed: 16.452752ms May 23 23:55:29.829: INFO: Pod "pod-update-activedeadlineseconds-d0dccfb5-479b-4e63-ac85-bb739aae454f": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.021632701s May 23 23:55:29.829: INFO: Pod "pod-update-activedeadlineseconds-d0dccfb5-479b-4e63-ac85-bb739aae454f" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:55:29.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4812" for this suite. 
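------------------------------
What "updating the pod" means in the test above: activeDeadlineSeconds is one of the few pod spec fields that may change after creation (it can be set or decreased, never removed or increased), and once the deadline passes the kubelet fails the pod with reason DeadlineExceeded, the exact phase transition logged here. A minimal sketch of such an update; the kubeconfig handling, namespace, and pod name are illustrative:

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Shorten the running pod's deadline; the kubelet then terminates it
	// and reports Phase=Failed, Reason=DeadlineExceeded.
	patch := []byte(`{"spec":{"activeDeadlineSeconds":5}}`)
	if _, err := cs.CoreV1().Pods("pods-4812").Patch(context.Background(),
		"pod-update-activedeadlineseconds-example",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}
```
------------------------------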
• [SLOW TEST:6.759 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":288,"completed":95,"skipped":1656,"failed":0} SS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:55:29.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap that has name configmap-test-emptyKey-b9b99cf0-865f-4d1d-bdab-f0e639dee439 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:55:29.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5343" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":288,"completed":96,"skipped":1658,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:55:29.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:55:45.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4265" for this suite. STEP: Destroying namespace "nsdeletetest-65" for this suite. May 23 23:55:45.254: INFO: Namespace nsdeletetest-65 was already deleted STEP: Destroying namespace "nsdeletetest-2529" for this suite. 
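------------------------------
The namespace test's core loop is: delete the namespace, poll until the Terminating phase ends, and rely on the namespace controller to garbage-collect every pod inside. A minimal sketch of that delete-and-poll flow; the poll intervals and namespace name are illustrative:

```go
package main

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	if err := cs.CoreV1().Namespaces().Delete(ctx, "nsdeletetest-example", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}

	// The namespace lingers in Terminating while its contents are deleted;
	// a NotFound on GET means the cascade has finished and no pods remain.
	err = wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		_, err := cs.CoreV1().Namespaces().Get(ctx, "nsdeletetest-example", metav1.GetOptions{})
		return apierrors.IsNotFound(err), nil
	})
	if err != nil {
		panic(err)
	}
}
```
------------------------------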
• [SLOW TEST:15.267 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":288,"completed":97,"skipped":1683,"failed":0} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:55:45.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 23 23:55:45.313: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 23 23:55:48.266: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7611 create -f -' May 23 23:55:52.315: INFO: stderr: "" May 23 23:55:52.315: INFO: stdout: "e2e-test-crd-publish-openapi-270-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 23 23:55:52.315: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7611 delete e2e-test-crd-publish-openapi-270-crds test-cr' May 23 23:55:52.432: INFO: stderr: "" May 23 23:55:52.432: INFO: stdout: "e2e-test-crd-publish-openapi-270-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" May 23 23:55:52.432: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7611 apply -f -' May 23 23:55:55.012: INFO: stderr: "" May 23 23:55:55.012: INFO: stdout: "e2e-test-crd-publish-openapi-270-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 23 23:55:55.012: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7611 delete e2e-test-crd-publish-openapi-270-crds test-cr' May 23 23:55:55.126: INFO: stderr: "" May 23 23:55:55.126: INFO: stdout: "e2e-test-crd-publish-openapi-270-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema May 23 23:55:55.126: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-270-crds' May 23 23:55:57.613: INFO: stderr: "" May 23 23:55:57.613: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-270-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:56:00.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7611" for this suite. • [SLOW TEST:15.316 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":288,"completed":98,"skipped":1683,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:56:00.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 23 23:56:00.635: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:56:01.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2358" for this suite. 
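------------------------------
On the CRD tests above: in apiextensions/v1 every served version must carry a structural schema, so a CRD "without validation schema" is expressed with x-kubernetes-preserve-unknown-fields: true, which is why the kubectl create/apply calls with arbitrary unknown properties succeed and why `kubectl explain` prints an empty DESCRIPTION. A minimal sketch of such a CRD; the group, kind, and names are illustrative:

```go
package main

import (
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// permissiveCRD builds a CRD whose schema accepts any unknown properties,
// the v1 equivalent of "no validation schema".
func permissiveCRD() *apiextensionsv1.CustomResourceDefinition {
	preserve := true
	return &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "testcrs.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural:   "testcrs",
				Singular: "testcr",
				Kind:     "TestCr",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name:    "v1",
				Served:  true,
				Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
						Type:                   "object",
						XPreserveUnknownFields: &preserve, // accept unknown fields
					},
				},
			}},
		},
	}
}

func main() { _ = permissiveCRD() }
```
------------------------------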
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":288,"completed":99,"skipped":1695,"failed":0} SSS ------------------------------ [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:56:01.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod var-expansion-26e8ff9f-60b3-4ba9-888b-c4fe5f650142 STEP: updating the pod May 23 23:56:10.296: INFO: Successfully updated pod "var-expansion-26e8ff9f-60b3-4ba9-888b-c4fe5f650142" STEP: waiting for pod and container restart STEP: Failing liveness probe May 23 23:56:10.319: INFO: ExecWithOptions {Command:[/bin/sh -c rm /volume_mount/foo/test.log] Namespace:var-expansion-4300 PodName:var-expansion-26e8ff9f-60b3-4ba9-888b-c4fe5f650142 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 23 23:56:10.319: INFO: >>> kubeConfig: /root/.kube/config I0523 23:56:10.365915 7 log.go:172] (0xc002c2c370) (0xc0017f3180) Create stream I0523 23:56:10.365960 7 log.go:172] (0xc002c2c370) (0xc0017f3180) Stream added, broadcasting: 1 I0523 23:56:10.367634 7 log.go:172] (0xc002c2c370) Reply frame received for 1 I0523 23:56:10.367687 7 log.go:172] (0xc002c2c370) (0xc0006450e0) Create stream I0523 23:56:10.367702 7 log.go:172] (0xc002c2c370) (0xc0006450e0) Stream added, broadcasting: 3 I0523 23:56:10.368544 7 log.go:172] (0xc002c2c370) Reply frame received for 3 I0523 23:56:10.368569 7 log.go:172] (0xc002c2c370) (0xc0017f3360) Create stream I0523 23:56:10.368580 7 log.go:172] (0xc002c2c370) (0xc0017f3360) Stream added, broadcasting: 5 I0523 23:56:10.369674 7 log.go:172] (0xc002c2c370) Reply frame received for 5 I0523 23:56:10.444365 7 log.go:172] (0xc002c2c370) Data frame received for 3 I0523 23:56:10.444400 7 log.go:172] (0xc0006450e0) (3) Data frame handling I0523 23:56:10.444420 7 log.go:172] (0xc002c2c370) Data frame received for 5 I0523 23:56:10.444429 7 log.go:172] (0xc0017f3360) (5) Data frame handling I0523 23:56:10.445994 7 log.go:172] (0xc002c2c370) Data frame received for 1 I0523 23:56:10.446013 7 log.go:172] (0xc0017f3180) (1) Data frame handling I0523 23:56:10.446024 7 log.go:172] (0xc0017f3180) (1) Data frame sent I0523 23:56:10.446038 7 log.go:172] (0xc002c2c370) (0xc0017f3180) Stream removed, broadcasting: 1 I0523 23:56:10.446051 7 log.go:172] (0xc002c2c370) Go away received I0523 23:56:10.446229 7 log.go:172] (0xc002c2c370) (0xc0017f3180) Stream removed, broadcasting: 1 I0523 23:56:10.446273 7 log.go:172] (0xc002c2c370) (0xc0006450e0) Stream removed, broadcasting: 3 I0523 23:56:10.446288 7 log.go:172] (0xc002c2c370) (0xc0017f3360) Stream removed, broadcasting: 5 May 23 23:56:10.446: INFO: Pod exec 
output: / STEP: Waiting for container to restart May 23 23:56:10.450: INFO: Container dapi-container, restarts: 0 May 23 23:56:20.457: INFO: Container dapi-container, restarts: 0 May 23 23:56:30.455: INFO: Container dapi-container, restarts: 0 May 23 23:56:40.455: INFO: Container dapi-container, restarts: 0 May 23 23:56:50.455: INFO: Container dapi-container, restarts: 1 May 23 23:56:50.455: INFO: Container has restart count: 1 STEP: Rewriting the file May 23 23:56:50.455: INFO: ExecWithOptions {Command:[/bin/sh -c echo test-after > /volume_mount/foo/test.log] Namespace:var-expansion-4300 PodName:var-expansion-26e8ff9f-60b3-4ba9-888b-c4fe5f650142 ContainerName:side-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 23 23:56:50.455: INFO: >>> kubeConfig: /root/.kube/config I0523 23:56:50.492195 7 log.go:172] (0xc003fce370) (0xc000673680) Create stream I0523 23:56:50.492222 7 log.go:172] (0xc003fce370) (0xc000673680) Stream added, broadcasting: 1 I0523 23:56:50.494111 7 log.go:172] (0xc003fce370) Reply frame received for 1 I0523 23:56:50.494178 7 log.go:172] (0xc003fce370) (0xc0012aa320) Create stream I0523 23:56:50.494237 7 log.go:172] (0xc003fce370) (0xc0012aa320) Stream added, broadcasting: 3 I0523 23:56:50.495258 7 log.go:172] (0xc003fce370) Reply frame received for 3 I0523 23:56:50.495301 7 log.go:172] (0xc003fce370) (0xc000645e00) Create stream I0523 23:56:50.495325 7 log.go:172] (0xc003fce370) (0xc000645e00) Stream added, broadcasting: 5 I0523 23:56:50.496456 7 log.go:172] (0xc003fce370) Reply frame received for 5 I0523 23:56:50.589354 7 log.go:172] (0xc003fce370) Data frame received for 5 I0523 23:56:50.589386 7 log.go:172] (0xc000645e00) (5) Data frame handling I0523 23:56:50.589411 7 log.go:172] (0xc003fce370) Data frame received for 3 I0523 23:56:50.589423 7 log.go:172] (0xc0012aa320) (3) Data frame handling I0523 23:56:50.591113 7 log.go:172] (0xc003fce370) Data frame received for 1 I0523 23:56:50.591148 7 log.go:172] (0xc000673680) (1) Data frame handling I0523 23:56:50.591169 7 log.go:172] (0xc000673680) (1) Data frame sent I0523 23:56:50.591184 7 log.go:172] (0xc003fce370) (0xc000673680) Stream removed, broadcasting: 1 I0523 23:56:50.591211 7 log.go:172] (0xc003fce370) Go away received I0523 23:56:50.591275 7 log.go:172] (0xc003fce370) (0xc000673680) Stream removed, broadcasting: 1 I0523 23:56:50.591298 7 log.go:172] (0xc003fce370) (0xc0012aa320) Stream removed, broadcasting: 3 I0523 23:56:50.591319 7 log.go:172] (0xc003fce370) (0xc000645e00) Stream removed, broadcasting: 5 May 23 23:56:50.591: INFO: Exec stderr: "" May 23 23:56:50.591: INFO: Pod exec output: STEP: Waiting for container to stop restarting May 23 23:57:16.599: INFO: Container has restart count: 2 May 23 23:58:18.600: INFO: Container restart has stabilized STEP: test for subpath mounted with old value May 23 23:58:18.606: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /volume_mount/foo/test.log] Namespace:var-expansion-4300 PodName:var-expansion-26e8ff9f-60b3-4ba9-888b-c4fe5f650142 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 23 23:58:18.606: INFO: >>> kubeConfig: /root/.kube/config I0523 23:58:18.631865 7 log.go:172] (0xc0026ec2c0) (0xc0010dde00) Create stream I0523 23:58:18.631888 7 log.go:172] (0xc0026ec2c0) (0xc0010dde00) Stream added, broadcasting: 1 I0523 23:58:18.633441 7 log.go:172] (0xc0026ec2c0) Reply frame received for 1 I0523 23:58:18.633472 7 log.go:172] (0xc0026ec2c0) (0xc001e7c6e0) Create stream I0523 
23:58:18.633485 7 log.go:172] (0xc0026ec2c0) (0xc001e7c6e0) Stream added, broadcasting: 3 I0523 23:58:18.634210 7 log.go:172] (0xc0026ec2c0) Reply frame received for 3 I0523 23:58:18.634239 7 log.go:172] (0xc0026ec2c0) (0xc0010ddf40) Create stream I0523 23:58:18.634249 7 log.go:172] (0xc0026ec2c0) (0xc0010ddf40) Stream added, broadcasting: 5 I0523 23:58:18.634963 7 log.go:172] (0xc0026ec2c0) Reply frame received for 5 I0523 23:58:18.716620 7 log.go:172] (0xc0026ec2c0) Data frame received for 3 I0523 23:58:18.716665 7 log.go:172] (0xc001e7c6e0) (3) Data frame handling I0523 23:58:18.716717 7 log.go:172] (0xc0026ec2c0) Data frame received for 5 I0523 23:58:18.716740 7 log.go:172] (0xc0010ddf40) (5) Data frame handling I0523 23:58:18.718500 7 log.go:172] (0xc0026ec2c0) Data frame received for 1 I0523 23:58:18.718560 7 log.go:172] (0xc0010dde00) (1) Data frame handling I0523 23:58:18.718602 7 log.go:172] (0xc0010dde00) (1) Data frame sent I0523 23:58:18.718643 7 log.go:172] (0xc0026ec2c0) (0xc0010dde00) Stream removed, broadcasting: 1 I0523 23:58:18.718682 7 log.go:172] (0xc0026ec2c0) Go away received I0523 23:58:18.718802 7 log.go:172] (0xc0026ec2c0) (0xc0010dde00) Stream removed, broadcasting: 1 I0523 23:58:18.718829 7 log.go:172] (0xc0026ec2c0) (0xc001e7c6e0) Stream removed, broadcasting: 3 I0523 23:58:18.718866 7 log.go:172] (0xc0026ec2c0) (0xc0010ddf40) Stream removed, broadcasting: 5 May 23 23:58:18.723: INFO: ExecWithOptions {Command:[/bin/sh -c test ! -f /volume_mount/newsubpath/test.log] Namespace:var-expansion-4300 PodName:var-expansion-26e8ff9f-60b3-4ba9-888b-c4fe5f650142 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 23 23:58:18.723: INFO: >>> kubeConfig: /root/.kube/config I0523 23:58:18.753785 7 log.go:172] (0xc002c2c580) (0xc001f108c0) Create stream I0523 23:58:18.753830 7 log.go:172] (0xc002c2c580) (0xc001f108c0) Stream added, broadcasting: 1 I0523 23:58:18.755753 7 log.go:172] (0xc002c2c580) Reply frame received for 1 I0523 23:58:18.755847 7 log.go:172] (0xc002c2c580) (0xc001e7c820) Create stream I0523 23:58:18.755869 7 log.go:172] (0xc002c2c580) (0xc001e7c820) Stream added, broadcasting: 3 I0523 23:58:18.756617 7 log.go:172] (0xc002c2c580) Reply frame received for 3 I0523 23:58:18.756646 7 log.go:172] (0xc002c2c580) (0xc001e7c960) Create stream I0523 23:58:18.756656 7 log.go:172] (0xc002c2c580) (0xc001e7c960) Stream added, broadcasting: 5 I0523 23:58:18.757658 7 log.go:172] (0xc002c2c580) Reply frame received for 5 I0523 23:58:18.816510 7 log.go:172] (0xc002c2c580) Data frame received for 3 I0523 23:58:18.816539 7 log.go:172] (0xc001e7c820) (3) Data frame handling I0523 23:58:18.816564 7 log.go:172] (0xc002c2c580) Data frame received for 5 I0523 23:58:18.816587 7 log.go:172] (0xc001e7c960) (5) Data frame handling I0523 23:58:18.817929 7 log.go:172] (0xc002c2c580) Data frame received for 1 I0523 23:58:18.817971 7 log.go:172] (0xc001f108c0) (1) Data frame handling I0523 23:58:18.817996 7 log.go:172] (0xc001f108c0) (1) Data frame sent I0523 23:58:18.818019 7 log.go:172] (0xc002c2c580) (0xc001f108c0) Stream removed, broadcasting: 1 I0523 23:58:18.818109 7 log.go:172] (0xc002c2c580) (0xc001f108c0) Stream removed, broadcasting: 1 I0523 23:58:18.818128 7 log.go:172] (0xc002c2c580) (0xc001e7c820) Stream removed, broadcasting: 3 I0523 23:58:18.818371 7 log.go:172] (0xc002c2c580) Go away received I0523 23:58:18.818426 7 log.go:172] (0xc002c2c580) (0xc001e7c960) Stream removed, broadcasting: 5 May 23 23:58:18.818: INFO: Deleting 
pod "var-expansion-26e8ff9f-60b3-4ba9-888b-c4fe5f650142" in namespace "var-expansion-4300" May 23 23:58:18.860: INFO: Wait up to 5m0s for pod "var-expansion-26e8ff9f-60b3-4ba9-888b-c4fe5f650142" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:58:52.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4300" for this suite. • [SLOW TEST:171.188 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]","total":288,"completed":100,"skipped":1698,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:58:52.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 23 23:58:52.993: INFO: The status of Pod test-webserver-67a0008e-055f-4ac3-b248-78ca46190661 is Pending, waiting for it to be Running (with Ready = true) May 23 23:58:55.032: INFO: The status of Pod test-webserver-67a0008e-055f-4ac3-b248-78ca46190661 is Pending, waiting for it to be Running (with Ready = true) May 23 23:58:56.997: INFO: The status of Pod test-webserver-67a0008e-055f-4ac3-b248-78ca46190661 is Running (Ready = false) May 23 23:58:58.997: INFO: The status of Pod test-webserver-67a0008e-055f-4ac3-b248-78ca46190661 is Running (Ready = false) May 23 23:59:00.997: INFO: The status of Pod test-webserver-67a0008e-055f-4ac3-b248-78ca46190661 is Running (Ready = false) May 23 23:59:02.997: INFO: The status of Pod test-webserver-67a0008e-055f-4ac3-b248-78ca46190661 is Running (Ready = false) May 23 23:59:05.020: INFO: The status of Pod test-webserver-67a0008e-055f-4ac3-b248-78ca46190661 is Running (Ready = false) May 23 23:59:06.997: INFO: The status of Pod test-webserver-67a0008e-055f-4ac3-b248-78ca46190661 is Running (Ready = false) May 23 23:59:08.997: INFO: The status of Pod test-webserver-67a0008e-055f-4ac3-b248-78ca46190661 is Running (Ready = false) May 23 23:59:10.997: INFO: The status of Pod test-webserver-67a0008e-055f-4ac3-b248-78ca46190661 is Running (Ready = false) May 23 23:59:12.998: INFO: The status of Pod 
test-webserver-67a0008e-055f-4ac3-b248-78ca46190661 is Running (Ready = false) May 23 23:59:14.996: INFO: The status of Pod test-webserver-67a0008e-055f-4ac3-b248-78ca46190661 is Running (Ready = false) May 23 23:59:16.997: INFO: The status of Pod test-webserver-67a0008e-055f-4ac3-b248-78ca46190661 is Running (Ready = true) May 23 23:59:16.999: INFO: Container started at 2020-05-23 23:58:55 +0000 UTC, pod became ready at 2020-05-23 23:59:16 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:59:17.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3919" for this suite. • [SLOW TEST:24.114 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":288,"completed":101,"skipped":1719,"failed":0} [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:59:17.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller May 23 23:59:17.102: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-379' May 23 23:59:19.036: INFO: stderr: "" May 23 23:59:19.036: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 23 23:59:19.036: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-379' May 23 23:59:19.188: INFO: stderr: "" May 23 23:59:19.188: INFO: stdout: "update-demo-nautilus-8kxrj update-demo-nautilus-92n9b " May 23 23:59:19.189: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8kxrj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-379' May 23 23:59:19.281: INFO: stderr: "" May 23 23:59:19.281: INFO: stdout: "" May 23 23:59:19.281: INFO: update-demo-nautilus-8kxrj is created but not running May 23 23:59:24.282: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-379' May 23 23:59:24.408: INFO: stderr: "" May 23 23:59:24.408: INFO: stdout: "update-demo-nautilus-8kxrj update-demo-nautilus-92n9b " May 23 23:59:24.408: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8kxrj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-379' May 23 23:59:24.525: INFO: stderr: "" May 23 23:59:24.525: INFO: stdout: "true" May 23 23:59:24.525: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8kxrj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-379' May 23 23:59:24.620: INFO: stderr: "" May 23 23:59:24.620: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 23 23:59:24.620: INFO: validating pod update-demo-nautilus-8kxrj May 23 23:59:24.630: INFO: got data: { "image": "nautilus.jpg" } May 23 23:59:24.630: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 23 23:59:24.630: INFO: update-demo-nautilus-8kxrj is verified up and running May 23 23:59:24.630: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-92n9b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-379' May 23 23:59:24.729: INFO: stderr: "" May 23 23:59:24.729: INFO: stdout: "true" May 23 23:59:24.729: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-92n9b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-379' May 23 23:59:24.831: INFO: stderr: "" May 23 23:59:24.831: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 23 23:59:24.831: INFO: validating pod update-demo-nautilus-92n9b May 23 23:59:24.835: INFO: got data: { "image": "nautilus.jpg" } May 23 23:59:24.835: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 23 23:59:24.835: INFO: update-demo-nautilus-92n9b is verified up and running STEP: using delete to clean up resources May 23 23:59:24.835: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-379' May 23 23:59:24.942: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 23 23:59:24.942: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 23 23:59:24.942: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-379' May 23 23:59:25.040: INFO: stderr: "No resources found in kubectl-379 namespace.\n" May 23 23:59:25.040: INFO: stdout: "" May 23 23:59:25.040: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-379 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 23 23:59:25.161: INFO: stderr: "" May 23 23:59:25.161: INFO: stdout: "update-demo-nautilus-8kxrj\nupdate-demo-nautilus-92n9b\n" May 23 23:59:25.661: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-379' May 23 23:59:25.771: INFO: stderr: "No resources found in kubectl-379 namespace.\n" May 23 23:59:25.771: INFO: stdout: "" May 23 23:59:25.771: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-379 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 23 23:59:25.873: INFO: stderr: "" May 23 23:59:25.873: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:59:25.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-379" for this suite. 
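------------------------------
The Update Demo test drives everything through kubectl and Go templates, but the object it creates is a plain two-replica ReplicationController selected by name=update-demo, which is what the get/validate loops above poll. A minimal sketch of an equivalent RC; the manifest the test actually pipes to kubectl may differ in detail:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// updateDemoRC builds a two-replica nautilus RC matching the pods the log
// polls (update-demo-nautilus-8kxrj, update-demo-nautilus-92n9b).
func updateDemoRC() *corev1.ReplicationController {
	replicas := int32(2)
	labels := map[string]string{"name": "update-demo"}
	return &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "update-demo-nautilus"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels, // the -l name=update-demo queries above use this
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "update-demo",
						Image: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0",
						Ports: []corev1.ContainerPort{{ContainerPort: 80}},
					}},
				},
			},
		},
	}
}

func main() { _ = updateDemoRC() }
```
------------------------------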
• [SLOW TEST:8.870 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":288,"completed":102,"skipped":1719,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:59:25.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-4084 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-4084 I0523 23:59:26.590076 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-4084, replica count: 2 I0523 23:59:29.640475 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0523 23:59:32.640693 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 23 23:59:32.640: INFO: Creating new exec pod May 23 23:59:37.689: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4084 execpod5sc58 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 23 23:59:37.934: INFO: stderr: "I0523 23:59:37.821274 1291 log.go:172] (0xc0006bc790) (0xc0006975e0) Create stream\nI0523 23:59:37.821315 1291 log.go:172] (0xc0006bc790) (0xc0006975e0) Stream added, broadcasting: 1\nI0523 23:59:37.822751 1291 log.go:172] (0xc0006bc790) Reply frame received for 1\nI0523 23:59:37.822777 1291 log.go:172] (0xc0006bc790) (0xc0001390e0) Create stream\nI0523 23:59:37.822787 1291 log.go:172] (0xc0006bc790) (0xc0001390e0) Stream added, broadcasting: 3\nI0523 23:59:37.823590 1291 log.go:172] (0xc0006bc790) Reply frame received for 3\nI0523 23:59:37.823641 1291 log.go:172] (0xc0006bc790) (0xc000697680) Create stream\nI0523 23:59:37.823655 1291 log.go:172] (0xc0006bc790) (0xc000697680) Stream added, broadcasting: 5\nI0523 23:59:37.824290 1291 log.go:172] (0xc0006bc790) Reply frame received for 5\nI0523 23:59:37.922016 1291 log.go:172] (0xc0006bc790) Data frame received for 5\nI0523 23:59:37.922040 1291 log.go:172] (0xc000697680) (5) Data frame 
handling\nI0523 23:59:37.922056 1291 log.go:172] (0xc000697680) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0523 23:59:37.926594 1291 log.go:172] (0xc0006bc790) Data frame received for 5\nI0523 23:59:37.926620 1291 log.go:172] (0xc000697680) (5) Data frame handling\nI0523 23:59:37.926645 1291 log.go:172] (0xc000697680) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0523 23:59:37.927702 1291 log.go:172] (0xc0006bc790) Data frame received for 3\nI0523 23:59:37.927724 1291 log.go:172] (0xc0001390e0) (3) Data frame handling\nI0523 23:59:37.927749 1291 log.go:172] (0xc0006bc790) Data frame received for 5\nI0523 23:59:37.927758 1291 log.go:172] (0xc000697680) (5) Data frame handling\nI0523 23:59:37.929765 1291 log.go:172] (0xc0006bc790) Data frame received for 1\nI0523 23:59:37.929791 1291 log.go:172] (0xc0006975e0) (1) Data frame handling\nI0523 23:59:37.929809 1291 log.go:172] (0xc0006975e0) (1) Data frame sent\nI0523 23:59:37.929829 1291 log.go:172] (0xc0006bc790) (0xc0006975e0) Stream removed, broadcasting: 1\nI0523 23:59:37.929849 1291 log.go:172] (0xc0006bc790) Go away received\nI0523 23:59:37.930343 1291 log.go:172] (0xc0006bc790) (0xc0006975e0) Stream removed, broadcasting: 1\nI0523 23:59:37.930363 1291 log.go:172] (0xc0006bc790) (0xc0001390e0) Stream removed, broadcasting: 3\nI0523 23:59:37.930373 1291 log.go:172] (0xc0006bc790) (0xc000697680) Stream removed, broadcasting: 5\n" May 23 23:59:37.935: INFO: stdout: "" May 23 23:59:37.935: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4084 execpod5sc58 -- /bin/sh -x -c nc -zv -t -w 2 10.105.71.164 80' May 23 23:59:38.156: INFO: stderr: "I0523 23:59:38.065447 1315 log.go:172] (0xc000a66630) (0xc000309ae0) Create stream\nI0523 23:59:38.065504 1315 log.go:172] (0xc000a66630) (0xc000309ae0) Stream added, broadcasting: 1\nI0523 23:59:38.067562 1315 log.go:172] (0xc000a66630) Reply frame received for 1\nI0523 23:59:38.067618 1315 log.go:172] (0xc000a66630) (0xc0003bcf00) Create stream\nI0523 23:59:38.067641 1315 log.go:172] (0xc000a66630) (0xc0003bcf00) Stream added, broadcasting: 3\nI0523 23:59:38.068582 1315 log.go:172] (0xc000a66630) Reply frame received for 3\nI0523 23:59:38.068613 1315 log.go:172] (0xc000a66630) (0xc0002ec0a0) Create stream\nI0523 23:59:38.068624 1315 log.go:172] (0xc000a66630) (0xc0002ec0a0) Stream added, broadcasting: 5\nI0523 23:59:38.069814 1315 log.go:172] (0xc000a66630) Reply frame received for 5\nI0523 23:59:38.148861 1315 log.go:172] (0xc000a66630) Data frame received for 3\nI0523 23:59:38.148912 1315 log.go:172] (0xc0003bcf00) (3) Data frame handling\nI0523 23:59:38.148948 1315 log.go:172] (0xc000a66630) Data frame received for 5\nI0523 23:59:38.148978 1315 log.go:172] (0xc0002ec0a0) (5) Data frame handling\nI0523 23:59:38.148996 1315 log.go:172] (0xc0002ec0a0) (5) Data frame sent\nI0523 23:59:38.149008 1315 log.go:172] (0xc000a66630) Data frame received for 5\nI0523 23:59:38.149018 1315 log.go:172] (0xc0002ec0a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.105.71.164 80\nConnection to 10.105.71.164 80 port [tcp/http] succeeded!\nI0523 23:59:38.150343 1315 log.go:172] (0xc000a66630) Data frame received for 1\nI0523 23:59:38.150388 1315 log.go:172] (0xc000309ae0) (1) Data frame handling\nI0523 23:59:38.150420 1315 log.go:172] (0xc000309ae0) (1) Data frame sent\nI0523 23:59:38.150447 1315 log.go:172] (0xc000a66630) (0xc000309ae0) Stream removed, broadcasting: 1\nI0523 
23:59:38.150477 1315 log.go:172] (0xc000a66630) Go away received\nI0523 23:59:38.150814 1315 log.go:172] (0xc000a66630) (0xc000309ae0) Stream removed, broadcasting: 1\nI0523 23:59:38.150842 1315 log.go:172] (0xc000a66630) (0xc0003bcf00) Stream removed, broadcasting: 3\nI0523 23:59:38.150866 1315 log.go:172] (0xc000a66630) (0xc0002ec0a0) Stream removed, broadcasting: 5\n" May 23 23:59:38.156: INFO: stdout: "" May 23 23:59:38.156: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4084 execpod5sc58 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30215' May 23 23:59:38.375: INFO: stderr: "I0523 23:59:38.288809 1334 log.go:172] (0xc000b5fad0) (0xc0004103c0) Create stream\nI0523 23:59:38.288856 1334 log.go:172] (0xc000b5fad0) (0xc0004103c0) Stream added, broadcasting: 1\nI0523 23:59:38.291682 1334 log.go:172] (0xc000b5fad0) Reply frame received for 1\nI0523 23:59:38.291725 1334 log.go:172] (0xc000b5fad0) (0xc000410a00) Create stream\nI0523 23:59:38.291739 1334 log.go:172] (0xc000b5fad0) (0xc000410a00) Stream added, broadcasting: 3\nI0523 23:59:38.292881 1334 log.go:172] (0xc000b5fad0) Reply frame received for 3\nI0523 23:59:38.292918 1334 log.go:172] (0xc000b5fad0) (0xc0001546e0) Create stream\nI0523 23:59:38.292934 1334 log.go:172] (0xc000b5fad0) (0xc0001546e0) Stream added, broadcasting: 5\nI0523 23:59:38.294300 1334 log.go:172] (0xc000b5fad0) Reply frame received for 5\nI0523 23:59:38.368702 1334 log.go:172] (0xc000b5fad0) Data frame received for 5\nI0523 23:59:38.368751 1334 log.go:172] (0xc0001546e0) (5) Data frame handling\nI0523 23:59:38.368765 1334 log.go:172] (0xc0001546e0) (5) Data frame sent\nI0523 23:59:38.368775 1334 log.go:172] (0xc000b5fad0) Data frame received for 5\nI0523 23:59:38.368784 1334 log.go:172] (0xc0001546e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 30215\nConnection to 172.17.0.13 30215 port [tcp/30215] succeeded!\nI0523 23:59:38.368810 1334 log.go:172] (0xc000b5fad0) Data frame received for 3\nI0523 23:59:38.368822 1334 log.go:172] (0xc000410a00) (3) Data frame handling\nI0523 23:59:38.370449 1334 log.go:172] (0xc000b5fad0) Data frame received for 1\nI0523 23:59:38.370473 1334 log.go:172] (0xc0004103c0) (1) Data frame handling\nI0523 23:59:38.370488 1334 log.go:172] (0xc0004103c0) (1) Data frame sent\nI0523 23:59:38.370502 1334 log.go:172] (0xc000b5fad0) (0xc0004103c0) Stream removed, broadcasting: 1\nI0523 23:59:38.370589 1334 log.go:172] (0xc000b5fad0) Go away received\nI0523 23:59:38.370774 1334 log.go:172] (0xc000b5fad0) (0xc0004103c0) Stream removed, broadcasting: 1\nI0523 23:59:38.370790 1334 log.go:172] (0xc000b5fad0) (0xc000410a00) Stream removed, broadcasting: 3\nI0523 23:59:38.370797 1334 log.go:172] (0xc000b5fad0) (0xc0001546e0) Stream removed, broadcasting: 5\n" May 23 23:59:38.375: INFO: stdout: "" May 23 23:59:38.375: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-4084 execpod5sc58 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30215' May 23 23:59:38.587: INFO: stderr: "I0523 23:59:38.518478 1354 log.go:172] (0xc0004ea0b0) (0xc00014efa0) Create stream\nI0523 23:59:38.518558 1354 log.go:172] (0xc0004ea0b0) (0xc00014efa0) Stream added, broadcasting: 1\nI0523 23:59:38.521587 1354 log.go:172] (0xc0004ea0b0) Reply frame received for 1\nI0523 23:59:38.521624 1354 log.go:172] (0xc0004ea0b0) (0xc00024e320) Create stream\nI0523 23:59:38.521636 1354 log.go:172] (0xc0004ea0b0) (0xc00024e320) 
Stream added, broadcasting: 3\nI0523 23:59:38.522735 1354 log.go:172] (0xc0004ea0b0) Reply frame received for 3\nI0523 23:59:38.522766 1354 log.go:172] (0xc0004ea0b0) (0xc0005f0d20) Create stream\nI0523 23:59:38.522776 1354 log.go:172] (0xc0004ea0b0) (0xc0005f0d20) Stream added, broadcasting: 5\nI0523 23:59:38.523859 1354 log.go:172] (0xc0004ea0b0) Reply frame received for 5\nI0523 23:59:38.578268 1354 log.go:172] (0xc0004ea0b0) Data frame received for 5\nI0523 23:59:38.578311 1354 log.go:172] (0xc0005f0d20) (5) Data frame handling\nI0523 23:59:38.578349 1354 log.go:172] (0xc0005f0d20) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.12 30215\nConnection to 172.17.0.12 30215 port [tcp/30215] succeeded!\nI0523 23:59:38.578377 1354 log.go:172] (0xc0004ea0b0) Data frame received for 5\nI0523 23:59:38.578396 1354 log.go:172] (0xc0005f0d20) (5) Data frame handling\nI0523 23:59:38.578566 1354 log.go:172] (0xc0004ea0b0) Data frame received for 3\nI0523 23:59:38.578589 1354 log.go:172] (0xc00024e320) (3) Data frame handling\nI0523 23:59:38.580072 1354 log.go:172] (0xc0004ea0b0) Data frame received for 1\nI0523 23:59:38.580107 1354 log.go:172] (0xc00014efa0) (1) Data frame handling\nI0523 23:59:38.580153 1354 log.go:172] (0xc00014efa0) (1) Data frame sent\nI0523 23:59:38.580176 1354 log.go:172] (0xc0004ea0b0) (0xc00014efa0) Stream removed, broadcasting: 1\nI0523 23:59:38.580191 1354 log.go:172] (0xc0004ea0b0) Go away received\nI0523 23:59:38.580580 1354 log.go:172] (0xc0004ea0b0) (0xc00014efa0) Stream removed, broadcasting: 1\nI0523 23:59:38.580599 1354 log.go:172] (0xc0004ea0b0) (0xc00024e320) Stream removed, broadcasting: 3\nI0523 23:59:38.580609 1354 log.go:172] (0xc0004ea0b0) (0xc0005f0d20) Stream removed, broadcasting: 5\n" May 23 23:59:38.587: INFO: stdout: "" May 23 23:59:38.587: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:59:38.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4084" for this suite. 
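The wall of log.go stream chatter above comes from the e2e exec plumbing (stream setup and teardown is logged at I-level); the lines that matter are the "Connection to ... succeeded!" ones. The test verifies every access path once the Service flips from ExternalName to NodePort: the in-cluster DNS name, the allocated ClusterIP, and each node IP on the allocated nodePort. A hand-run sketch with the addresses from this run (the suite performs the type change through the client library; kubectl patch is a stand-in):

  kubectl -n services-4084 patch svc externalname-service -p '{"spec":{"type":"NodePort"}}'
  # service DNS name, ClusterIP, then both node IPs on the nodePort
  kubectl -n services-4084 exec execpod5sc58 -- nc -zv -t -w 2 externalname-service 80
  kubectl -n services-4084 exec execpod5sc58 -- nc -zv -t -w 2 10.105.71.164 80
  kubectl -n services-4084 exec execpod5sc58 -- nc -zv -t -w 2 172.17.0.13 30215
  kubectl -n services-4084 exec execpod5sc58 -- nc -zv -t -w 2 172.17.0.12 30215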
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:12.829 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":288,"completed":103,"skipped":1732,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:59:38.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 23 23:59:38.776: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 23 23:59:40.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6057" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":288,"completed":104,"skipped":1736,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 23 23:59:40.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-pkk4 STEP: Creating a pod to test atomic-volume-subpath May 23 23:59:40.135: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-pkk4" in namespace "subpath-4225" to be "Succeeded or Failed" May 23 23:59:40.139: INFO: Pod "pod-subpath-test-configmap-pkk4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.547ms May 23 23:59:42.142: INFO: Pod "pod-subpath-test-configmap-pkk4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007862618s May 23 23:59:44.147: INFO: Pod "pod-subpath-test-configmap-pkk4": Phase="Running", Reason="", readiness=true. Elapsed: 4.011916122s May 23 23:59:46.152: INFO: Pod "pod-subpath-test-configmap-pkk4": Phase="Running", Reason="", readiness=true. Elapsed: 6.016925326s May 23 23:59:48.156: INFO: Pod "pod-subpath-test-configmap-pkk4": Phase="Running", Reason="", readiness=true. Elapsed: 8.021378885s May 23 23:59:50.161: INFO: Pod "pod-subpath-test-configmap-pkk4": Phase="Running", Reason="", readiness=true. Elapsed: 10.025976176s May 23 23:59:52.165: INFO: Pod "pod-subpath-test-configmap-pkk4": Phase="Running", Reason="", readiness=true. Elapsed: 12.030778094s May 23 23:59:54.170: INFO: Pod "pod-subpath-test-configmap-pkk4": Phase="Running", Reason="", readiness=true. Elapsed: 14.035314959s May 23 23:59:56.175: INFO: Pod "pod-subpath-test-configmap-pkk4": Phase="Running", Reason="", readiness=true. Elapsed: 16.04000797s May 23 23:59:58.179: INFO: Pod "pod-subpath-test-configmap-pkk4": Phase="Running", Reason="", readiness=true. Elapsed: 18.044488963s May 24 00:00:00.183: INFO: Pod "pod-subpath-test-configmap-pkk4": Phase="Running", Reason="", readiness=true. Elapsed: 20.048272762s May 24 00:00:02.188: INFO: Pod "pod-subpath-test-configmap-pkk4": Phase="Running", Reason="", readiness=true. Elapsed: 22.053252639s May 24 00:00:04.192: INFO: Pod "pod-subpath-test-configmap-pkk4": Phase="Running", Reason="", readiness=true. Elapsed: 24.05717025s May 24 00:00:06.196: INFO: Pod "pod-subpath-test-configmap-pkk4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.061640093s STEP: Saw pod success May 24 00:00:06.196: INFO: Pod "pod-subpath-test-configmap-pkk4" satisfied condition "Succeeded or Failed" May 24 00:00:06.199: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-pkk4 container test-container-subpath-configmap-pkk4: STEP: delete the pod May 24 00:00:06.237: INFO: Waiting for pod pod-subpath-test-configmap-pkk4 to disappear May 24 00:00:06.242: INFO: Pod pod-subpath-test-configmap-pkk4 no longer exists STEP: Deleting pod pod-subpath-test-configmap-pkk4 May 24 00:00:06.242: INFO: Deleting pod "pod-subpath-test-configmap-pkk4" in namespace "subpath-4225" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:00:06.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4225" for this suite. • [SLOW TEST:26.217 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":288,"completed":105,"skipped":1756,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:00:06.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:00:06.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2616" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":288,"completed":106,"skipped":1773,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:00:06.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8446.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8446.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8446.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8446.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 24 00:00:12.557: INFO: DNS probes using dns-test-01ff42e1-23dc-4018-9a2c-223f58f54c75 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8446.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8446.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8446.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8446.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 24 00:00:18.643: INFO: File 
wheezy_udp@dns-test-service-3.dns-8446.svc.cluster.local from pod dns-8446/dns-test-76e6ee84-10db-4047-84fc-f216d2653970 contains 'foo.example.com. ' instead of 'bar.example.com.' May 24 00:00:18.648: INFO: File jessie_udp@dns-test-service-3.dns-8446.svc.cluster.local from pod dns-8446/dns-test-76e6ee84-10db-4047-84fc-f216d2653970 contains 'foo.example.com. ' instead of 'bar.example.com.' May 24 00:00:18.648: INFO: Lookups using dns-8446/dns-test-76e6ee84-10db-4047-84fc-f216d2653970 failed for: [wheezy_udp@dns-test-service-3.dns-8446.svc.cluster.local jessie_udp@dns-test-service-3.dns-8446.svc.cluster.local] May 24 00:00:23.654: INFO: File wheezy_udp@dns-test-service-3.dns-8446.svc.cluster.local from pod dns-8446/dns-test-76e6ee84-10db-4047-84fc-f216d2653970 contains 'foo.example.com. ' instead of 'bar.example.com.' May 24 00:00:23.657: INFO: File jessie_udp@dns-test-service-3.dns-8446.svc.cluster.local from pod dns-8446/dns-test-76e6ee84-10db-4047-84fc-f216d2653970 contains 'foo.example.com. ' instead of 'bar.example.com.' May 24 00:00:23.657: INFO: Lookups using dns-8446/dns-test-76e6ee84-10db-4047-84fc-f216d2653970 failed for: [wheezy_udp@dns-test-service-3.dns-8446.svc.cluster.local jessie_udp@dns-test-service-3.dns-8446.svc.cluster.local] May 24 00:00:28.653: INFO: File wheezy_udp@dns-test-service-3.dns-8446.svc.cluster.local from pod dns-8446/dns-test-76e6ee84-10db-4047-84fc-f216d2653970 contains 'foo.example.com. ' instead of 'bar.example.com.' May 24 00:00:28.658: INFO: File jessie_udp@dns-test-service-3.dns-8446.svc.cluster.local from pod dns-8446/dns-test-76e6ee84-10db-4047-84fc-f216d2653970 contains 'foo.example.com. ' instead of 'bar.example.com.' May 24 00:00:28.658: INFO: Lookups using dns-8446/dns-test-76e6ee84-10db-4047-84fc-f216d2653970 failed for: [wheezy_udp@dns-test-service-3.dns-8446.svc.cluster.local jessie_udp@dns-test-service-3.dns-8446.svc.cluster.local] May 24 00:00:33.668: INFO: File wheezy_udp@dns-test-service-3.dns-8446.svc.cluster.local from pod dns-8446/dns-test-76e6ee84-10db-4047-84fc-f216d2653970 contains 'foo.example.com. ' instead of 'bar.example.com.' May 24 00:00:33.672: INFO: File jessie_udp@dns-test-service-3.dns-8446.svc.cluster.local from pod dns-8446/dns-test-76e6ee84-10db-4047-84fc-f216d2653970 contains 'foo.example.com. ' instead of 'bar.example.com.' May 24 00:00:33.672: INFO: Lookups using dns-8446/dns-test-76e6ee84-10db-4047-84fc-f216d2653970 failed for: [wheezy_udp@dns-test-service-3.dns-8446.svc.cluster.local jessie_udp@dns-test-service-3.dns-8446.svc.cluster.local] May 24 00:00:38.653: INFO: File wheezy_udp@dns-test-service-3.dns-8446.svc.cluster.local from pod dns-8446/dns-test-76e6ee84-10db-4047-84fc-f216d2653970 contains 'foo.example.com. ' instead of 'bar.example.com.' May 24 00:00:38.657: INFO: File jessie_udp@dns-test-service-3.dns-8446.svc.cluster.local from pod dns-8446/dns-test-76e6ee84-10db-4047-84fc-f216d2653970 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 24 00:00:38.657: INFO: Lookups using dns-8446/dns-test-76e6ee84-10db-4047-84fc-f216d2653970 failed for: [wheezy_udp@dns-test-service-3.dns-8446.svc.cluster.local jessie_udp@dns-test-service-3.dns-8446.svc.cluster.local] May 24 00:00:43.658: INFO: DNS probes using dns-test-76e6ee84-10db-4047-84fc-f216d2653970 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8446.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8446.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8446.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8446.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 24 00:00:50.289: INFO: DNS probes using dns-test-726aaf27-71e8-47ab-8a39-2dec351ca452 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:00:50.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8446" for this suite. • [SLOW TEST:43.985 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":288,"completed":107,"skipped":1817,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:00:50.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 00:00:51.937: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 24 00:00:53.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725875251, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725875251, loc:(*time.Location)(0x7c342a0)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725875252, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725875251, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 00:00:57.000: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:00:57.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2977" for this suite. STEP: Destroying namespace "webhook-2977-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.849 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":288,"completed":108,"skipped":1871,"failed":0} SSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:00:57.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 24 00:00:57.358: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:01:01.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-158" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":288,"completed":109,"skipped":1875,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:01:01.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 24 00:01:01.644: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-1561 /api/v1/namespaces/watch-1561/configmaps/e2e-watch-test-resource-version df8ca8aa-86fd-447c-bf27-621c7532af73 7148836 0 2020-05-24 00:01:01 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-05-24 00:01:01 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 24 00:01:01.645: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-1561 /api/v1/namespaces/watch-1561/configmaps/e2e-watch-test-resource-version df8ca8aa-86fd-447c-bf27-621c7532af73 7148837 0 2020-05-24 00:01:01 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-05-24 00:01:01 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:01:01.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1561" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":288,"completed":110,"skipped":1880,"failed":0} SS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:01:01.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 24 00:03:01.754: INFO: Deleting pod "var-expansion-ffb61cbc-d20e-4c8e-b754-514b5594b85d" in namespace "var-expansion-1955" May 24 00:03:01.760: INFO: Wait up to 5m0s for pod "var-expansion-ffb61cbc-d20e-4c8e-b754-514b5594b85d" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:03:03.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1955" for this suite. • [SLOW TEST:122.157 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":288,"completed":111,"skipped":1882,"failed":0} SSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:03:03.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:04:03.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9395" for this suite. 
• [SLOW TEST:60.141 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":288,"completed":112,"skipped":1887,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:04:03.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium May 24 00:04:04.055: INFO: Waiting up to 5m0s for pod "pod-bdebc9d9-83ce-47c0-bb69-5ee35bb2e807" in namespace "emptydir-3392" to be "Succeeded or Failed" May 24 00:04:04.076: INFO: Pod "pod-bdebc9d9-83ce-47c0-bb69-5ee35bb2e807": Phase="Pending", Reason="", readiness=false. Elapsed: 21.080733ms May 24 00:04:06.305: INFO: Pod "pod-bdebc9d9-83ce-47c0-bb69-5ee35bb2e807": Phase="Pending", Reason="", readiness=false. Elapsed: 2.249639155s May 24 00:04:08.309: INFO: Pod "pod-bdebc9d9-83ce-47c0-bb69-5ee35bb2e807": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.254047365s STEP: Saw pod success May 24 00:04:08.309: INFO: Pod "pod-bdebc9d9-83ce-47c0-bb69-5ee35bb2e807" satisfied condition "Succeeded or Failed" May 24 00:04:08.313: INFO: Trying to get logs from node latest-worker2 pod pod-bdebc9d9-83ce-47c0-bb69-5ee35bb2e807 container test-container: STEP: delete the pod May 24 00:04:08.415: INFO: Waiting for pod pod-bdebc9d9-83ce-47c0-bb69-5ee35bb2e807 to disappear May 24 00:04:08.432: INFO: Pod pod-bdebc9d9-83ce-47c0-bb69-5ee35bb2e807 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:04:08.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3392" for this suite. 
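The (root,0666,default) triple in the spec name pins down the three knobs of this emptyDir family: run as root, create the test file with mode 0666, and use the default medium (node filesystem) rather than tmpfs. A sketch that checks the same property with a stock image (the suite uses its agnhost mounttest container; the pod name is hypothetical):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-0666
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox:1.29
      command: ["sh", "-c", "touch /test/f && chmod 0666 /test/f && stat -c '%a' /test/f"]
      volumeMounts:
      - name: test-volume
        mountPath: /test
    volumes:
    - name: test-volume
      emptyDir: {}    # medium omitted = default, i.e. backed by node disk
  EOF
  kubectl logs emptydir-0666    # prints 666 once the pod has Succeeded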
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":113,"skipped":1892,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:04:08.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 24 00:04:08.578: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:04:08.582: INFO: Number of nodes with available pods: 0 May 24 00:04:08.582: INFO: Node latest-worker is running more than one daemon pod May 24 00:04:09.590: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:04:09.593: INFO: Number of nodes with available pods: 0 May 24 00:04:09.593: INFO: Node latest-worker is running more than one daemon pod May 24 00:04:10.717: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:04:10.721: INFO: Number of nodes with available pods: 0 May 24 00:04:10.721: INFO: Node latest-worker is running more than one daemon pod May 24 00:04:11.646: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:04:11.795: INFO: Number of nodes with available pods: 0 May 24 00:04:11.795: INFO: Node latest-worker is running more than one daemon pod May 24 00:04:12.611: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:04:12.615: INFO: Number of nodes with available pods: 1 May 24 00:04:12.615: INFO: Node latest-worker2 is running more than one daemon pod May 24 00:04:13.588: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:04:13.628: INFO: Number of nodes with available pods: 2 May 24 00:04:13.628: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
May 24 00:04:13.666: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:04:13.669: INFO: Number of nodes with available pods: 1 May 24 00:04:13.669: INFO: Node latest-worker is running more than one daemon pod May 24 00:04:14.674: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:04:14.678: INFO: Number of nodes with available pods: 1 May 24 00:04:14.678: INFO: Node latest-worker is running more than one daemon pod May 24 00:04:15.675: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:04:15.679: INFO: Number of nodes with available pods: 1 May 24 00:04:15.679: INFO: Node latest-worker is running more than one daemon pod May 24 00:04:16.674: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:04:16.693: INFO: Number of nodes with available pods: 1 May 24 00:04:16.693: INFO: Node latest-worker is running more than one daemon pod May 24 00:04:17.698: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:04:17.703: INFO: Number of nodes with available pods: 1 May 24 00:04:17.703: INFO: Node latest-worker is running more than one daemon pod May 24 00:04:18.707: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:04:18.710: INFO: Number of nodes with available pods: 1 May 24 00:04:18.710: INFO: Node latest-worker is running more than one daemon pod May 24 00:04:19.674: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:04:19.678: INFO: Number of nodes with available pods: 1 May 24 00:04:19.678: INFO: Node latest-worker is running more than one daemon pod May 24 00:04:20.675: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:04:20.678: INFO: Number of nodes with available pods: 2 May 24 00:04:20.678: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2127, will wait for the garbage collector to delete the pods May 24 00:04:20.742: INFO: Deleting DaemonSet.extensions daemon-set took: 7.460473ms May 24 00:04:21.042: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.32641ms May 24 00:04:34.950: INFO: Number of nodes with available pods: 0 May 24 00:04:34.950: INFO: Number of running nodes: 0, number of available pods: 0 May 24 00:04:34.952: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2127/daemonsets","resourceVersion":"7149645"},"items":null} May 24 00:04:34.954: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2127/pods","resourceVersion":"7149645"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:04:34.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2127" for this suite. • [SLOW TEST:26.528 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":288,"completed":114,"skipped":1920,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:04:34.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 00:04:35.573: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 24 00:04:37.581: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725875475, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725875475, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725875475, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725875475, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 00:04:40.670: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:04:40.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5485" for this suite. STEP: Destroying namespace "webhook-5485-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.976 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":288,"completed":115,"skipped":1932,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:04:40.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 24 00:04:41.009: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
May 24 00:04:41.015: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:04:41.070: INFO: Number of nodes with available pods: 0 May 24 00:04:41.070: INFO: Node latest-worker is running more than one daemon pod May 24 00:04:42.444: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:04:42.448: INFO: Number of nodes with available pods: 0 May 24 00:04:42.448: INFO: Node latest-worker is running more than one daemon pod May 24 00:04:43.357: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:04:43.361: INFO: Number of nodes with available pods: 0 May 24 00:04:43.361: INFO: Node latest-worker is running more than one daemon pod May 24 00:04:44.533: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:04:44.536: INFO: Number of nodes with available pods: 0 May 24 00:04:44.536: INFO: Node latest-worker is running more than one daemon pod May 24 00:04:45.076: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:04:45.079: INFO: Number of nodes with available pods: 0 May 24 00:04:45.079: INFO: Node latest-worker is running more than one daemon pod May 24 00:04:46.075: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:04:46.078: INFO: Number of nodes with available pods: 2 May 24 00:04:46.078: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 24 00:04:46.144: INFO: Wrong image for pod: daemon-set-hwwww. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 24 00:04:46.144: INFO: Wrong image for pod: daemon-set-kq42r. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 24 00:04:46.191: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:04:47.377: INFO: Wrong image for pod: daemon-set-hwwww. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 24 00:04:47.377: INFO: Wrong image for pod: daemon-set-kq42r. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 24 00:04:47.393: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:04:48.197: INFO: Wrong image for pod: daemon-set-hwwww. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 24 00:04:48.197: INFO: Wrong image for pod: daemon-set-kq42r. 
Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 24 00:04:48.202: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:04:49.196: INFO: Wrong image for pod: daemon-set-hwwww. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 24 00:04:49.196: INFO: Pod daemon-set-hwwww is not available May 24 00:04:49.196: INFO: Wrong image for pod: daemon-set-kq42r. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 24 00:04:49.201: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:04:50.196: INFO: Wrong image for pod: daemon-set-hwwww. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 24 00:04:50.196: INFO: Pod daemon-set-hwwww is not available May 24 00:04:50.196: INFO: Wrong image for pod: daemon-set-kq42r. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 24 00:04:50.201: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:04:51.195: INFO: Wrong image for pod: daemon-set-hwwww. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 24 00:04:51.195: INFO: Pod daemon-set-hwwww is not available May 24 00:04:51.195: INFO: Wrong image for pod: daemon-set-kq42r. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 24 00:04:51.198: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:04:52.197: INFO: Wrong image for pod: daemon-set-hwwww. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 24 00:04:52.197: INFO: Pod daemon-set-hwwww is not available May 24 00:04:52.197: INFO: Wrong image for pod: daemon-set-kq42r. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 24 00:04:52.201: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:04:53.196: INFO: Wrong image for pod: daemon-set-hwwww. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 24 00:04:53.196: INFO: Pod daemon-set-hwwww is not available May 24 00:04:53.196: INFO: Wrong image for pod: daemon-set-kq42r. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 24 00:04:53.201: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:04:54.196: INFO: Wrong image for pod: daemon-set-hwwww. 
Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 24 00:04:54.196: INFO: Pod daemon-set-hwwww is not available May 24 00:04:54.196: INFO: Wrong image for pod: daemon-set-kq42r. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 24 00:04:54.200: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:04:55.196: INFO: Wrong image for pod: daemon-set-kq42r. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 24 00:04:55.196: INFO: Pod daemon-set-xbc2w is not available May 24 00:04:55.200: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:04:56.195: INFO: Wrong image for pod: daemon-set-kq42r. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 24 00:04:56.195: INFO: Pod daemon-set-xbc2w is not available May 24 00:04:56.199: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:04:57.195: INFO: Wrong image for pod: daemon-set-kq42r. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 24 00:04:57.195: INFO: Pod daemon-set-xbc2w is not available May 24 00:04:57.198: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:04:58.196: INFO: Wrong image for pod: daemon-set-kq42r. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 24 00:04:58.200: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:04:59.195: INFO: Wrong image for pod: daemon-set-kq42r. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13, got: docker.io/library/httpd:2.4.38-alpine. May 24 00:04:59.195: INFO: Pod daemon-set-kq42r is not available May 24 00:04:59.200: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:05:00.196: INFO: Pod daemon-set-r9pmt is not available May 24 00:05:00.200: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
May 24 00:05:00.204: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:05:00.233: INFO: Number of nodes with available pods: 1 May 24 00:05:00.233: INFO: Node latest-worker2 is running more than one daemon pod May 24 00:05:01.238: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:05:01.241: INFO: Number of nodes with available pods: 1 May 24 00:05:01.241: INFO: Node latest-worker2 is running more than one daemon pod May 24 00:05:02.239: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:05:02.243: INFO: Number of nodes with available pods: 1 May 24 00:05:02.243: INFO: Node latest-worker2 is running more than one daemon pod May 24 00:05:03.237: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:05:03.240: INFO: Number of nodes with available pods: 2 May 24 00:05:03.240: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4494, will wait for the garbage collector to delete the pods May 24 00:05:03.309: INFO: Deleting DaemonSet.extensions daemon-set took: 6.314628ms May 24 00:05:03.409: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.209341ms May 24 00:05:15.313: INFO: Number of nodes with available pods: 0 May 24 00:05:15.313: INFO: Number of running nodes: 0, number of available pods: 0 May 24 00:05:15.316: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4494/daemonsets","resourceVersion":"7149937"},"items":null} May 24 00:05:15.318: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4494/pods","resourceVersion":"7149937"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:05:15.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4494" for this suite. 
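------------------------------
The "Update daemon pods image." step above is an image change on the DaemonSet's pod template; with updateStrategy RollingUpdate the controller then replaces pods node by node, which is exactly what the repeated "Wrong image for pod" polling observes until both pods report agnhost:2.13. A minimal client-go sketch of that trigger, assuming client-go v0.18.x (contemporary with the kube-apiserver v1.18.2 of this run), a kubeconfig in $KUBECONFIG, and a container named "app"; the container name is an assumption, not a value recovered from this log:

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Strategic-merge patch that swaps the pod template's image, the same
	// kind of change the test makes before polling for convergence.
	patch := []byte(`{"spec":{"template":{"spec":{"containers":[{"name":"app","image":"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13"}]}}}}`)
	if _, err := cs.AppsV1().DaemonSets("daemonsets-4494").Patch(
		context.TODO(), "daemon-set", types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("patched; the controller now rolls pods one node at a time")
}
------------------------------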
• [SLOW TEST:34.391 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":288,"completed":116,"skipped":1957,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:05:15.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 24 00:05:15.425: INFO: Waiting up to 5m0s for pod "downwardapi-volume-20148b8d-0c4f-4890-8f10-a7d8d6e7c5cc" in namespace "projected-9907" to be "Succeeded or Failed" May 24 00:05:15.429: INFO: Pod "downwardapi-volume-20148b8d-0c4f-4890-8f10-a7d8d6e7c5cc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.86398ms May 24 00:05:17.448: INFO: Pod "downwardapi-volume-20148b8d-0c4f-4890-8f10-a7d8d6e7c5cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023051319s May 24 00:05:19.451: INFO: Pod "downwardapi-volume-20148b8d-0c4f-4890-8f10-a7d8d6e7c5cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026848849s STEP: Saw pod success May 24 00:05:19.451: INFO: Pod "downwardapi-volume-20148b8d-0c4f-4890-8f10-a7d8d6e7c5cc" satisfied condition "Succeeded or Failed" May 24 00:05:19.455: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-20148b8d-0c4f-4890-8f10-a7d8d6e7c5cc container client-container: STEP: delete the pod May 24 00:05:19.507: INFO: Waiting for pod downwardapi-volume-20148b8d-0c4f-4890-8f10-a7d8d6e7c5cc to disappear May 24 00:05:19.515: INFO: Pod downwardapi-volume-20148b8d-0c4f-4890-8f10-a7d8d6e7c5cc no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:05:19.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9907" for this suite. 
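------------------------------
The projected downwardAPI pod above goes Pending then Succeeded because its only container prints a file that the kubelet populates with the container's own memory request, then exits. A sketch of the object's shape, assuming k8s.io/api v0.18.x; the pod name, image, command, and the 32Mi request are illustrative stand-ins, not values recovered from this run (the JSON print stands in for the POST the test performs):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13",
				Command: []string{"sh", "-c", "cat /etc/podinfo/mem_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("32Mi")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					// "Projected" variant of the downward API volume, as in the
					// test name; the kubelet writes requests.memory into the file.
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "mem_request",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "requests.memory",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
------------------------------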
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":288,"completed":117,"skipped":1985,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:05:19.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 24 00:05:20.279: INFO: Pod name wrapped-volume-race-7e061d0a-ca38-44e9-ac12-e49e784b4c0a: Found 0 pods out of 5 May 24 00:05:25.297: INFO: Pod name wrapped-volume-race-7e061d0a-ca38-44e9-ac12-e49e784b4c0a: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-7e061d0a-ca38-44e9-ac12-e49e784b4c0a in namespace emptydir-wrapper-4437, will wait for the garbage collector to delete the pods May 24 00:05:41.395: INFO: Deleting ReplicationController wrapped-volume-race-7e061d0a-ca38-44e9-ac12-e49e784b4c0a took: 8.527893ms May 24 00:05:41.795: INFO: Terminating ReplicationController wrapped-volume-race-7e061d0a-ca38-44e9-ac12-e49e784b4c0a pods took: 400.296499ms STEP: Creating RC which spawns configmap-volume pods May 24 00:05:55.634: INFO: Pod name wrapped-volume-race-c6279eba-abbf-447b-90dd-bc2b934faef5: Found 0 pods out of 5 May 24 00:06:00.643: INFO: Pod name wrapped-volume-race-c6279eba-abbf-447b-90dd-bc2b934faef5: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-c6279eba-abbf-447b-90dd-bc2b934faef5 in namespace emptydir-wrapper-4437, will wait for the garbage collector to delete the pods May 24 00:06:14.750: INFO: Deleting ReplicationController wrapped-volume-race-c6279eba-abbf-447b-90dd-bc2b934faef5 took: 6.866472ms May 24 00:06:15.050: INFO: Terminating ReplicationController wrapped-volume-race-c6279eba-abbf-447b-90dd-bc2b934faef5 pods took: 300.217248ms STEP: Creating RC which spawns configmap-volume pods May 24 00:06:25.200: INFO: Pod name wrapped-volume-race-9752776e-fc18-4d65-b800-6d585c1efc33: Found 0 pods out of 5 May 24 00:06:30.208: INFO: Pod name wrapped-volume-race-9752776e-fc18-4d65-b800-6d585c1efc33: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-9752776e-fc18-4d65-b800-6d585c1efc33 in namespace emptydir-wrapper-4437, will wait for the garbage collector to delete the pods May 24 00:06:46.472: INFO: Deleting ReplicationController wrapped-volume-race-9752776e-fc18-4d65-b800-6d585c1efc33 took: 9.737261ms May 24 00:06:46.873: INFO: Terminating ReplicationController wrapped-volume-race-9752776e-fc18-4d65-b800-6d585c1efc33 pods took: 400.351241ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 
May 24 00:06:55.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-4437" for this suite. • [SLOW TEST:96.244 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":288,"completed":118,"skipped":1988,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:06:55.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:06:59.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-779" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":288,"completed":119,"skipped":2012,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:06:59.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-139, will wait for the garbage collector to delete the pods May 24 00:07:06.314: INFO: Deleting Job.batch foo took: 194.830192ms May 24 00:07:06.814: INFO: Terminating Job.batch foo pods took: 500.307603ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:07:45.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-139" for this suite. 
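------------------------------
"deleting Job.batch foo in namespace job-139, will wait for the garbage collector to delete the pods" corresponds to a delete issued with a propagation policy, after which the suite polls until the Job and its pods are gone ("Ensuring job was deleted"). The exact policy the framework uses is not visible in this log; foreground propagation, shown below, is one way to get the same wait-for-GC behavior. Sketch assumes client-go v0.18.x and a kubeconfig in $KUBECONFIG:

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Foreground propagation: the Job object is only removed once the
	// garbage collector has deleted its dependent pods.
	policy := metav1.DeletePropagationForeground
	if err := cs.BatchV1().Jobs("job-139").Delete(context.TODO(), "foo",
		metav1.DeleteOptions{PropagationPolicy: &policy}); err != nil {
		panic(err)
	}
	fmt.Println("delete issued; pods are reaped by the garbage collector")
}
------------------------------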
• [SLOW TEST:45.465 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":288,"completed":120,"skipped":2069,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:07:45.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:07:52.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1163" for this suite. • [SLOW TEST:7.063 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":288,"completed":121,"skipped":2139,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:07:52.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 00:07:53.111: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 24 00:07:55.123: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725875673, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725875673, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725875673, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725875673, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 00:07:58.159: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:07:58.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4431" for this suite. STEP: Destroying namespace "webhook-4431-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.918 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":288,"completed":122,"skipped":2146,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:07:58.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-8943 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-8943 STEP: Creating statefulset with conflicting port in namespace statefulset-8943 STEP: Waiting until pod test-pod starts running in namespace statefulset-8943 STEP: Waiting until stateful pod ss-0 has been recreated and deleted at least once in namespace statefulset-8943 May 24 00:08:04.685: INFO: Observed stateful pod in namespace: statefulset-8943, name: ss-0, uid: e3b4c299-782e-45f7-a329-a992b1753151, status phase: Pending. Waiting for statefulset controller to delete. May 24 00:08:04.954: INFO: Observed stateful pod in namespace: statefulset-8943, name: ss-0, uid: e3b4c299-782e-45f7-a329-a992b1753151, status phase: Failed. Waiting for statefulset controller to delete. May 24 00:08:04.976: INFO: Observed stateful pod in namespace: statefulset-8943, name: ss-0, uid: e3b4c299-782e-45f7-a329-a992b1753151, status phase: Failed. Waiting for statefulset controller to delete.
May 24 00:08:04.987: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-8943 STEP: Removing pod with conflicting port in namespace statefulset-8943 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-8943 and is in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 24 00:08:11.125: INFO: Deleting all statefulset in ns statefulset-8943 May 24 00:08:11.128: INFO: Scaling statefulset ss to 0 May 24 00:08:31.141: INFO: Waiting for statefulset status.replicas updated to 0 May 24 00:08:31.145: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:08:31.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8943" for this suite. • [SLOW TEST:32.848 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":288,"completed":123,"skipped":2194,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:08:31.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-1b11389e-aff3-46ed-bd32-2c0d74ac3328 in namespace container-probe-9570 May 24 00:08:35.267: INFO: Started pod liveness-1b11389e-aff3-46ed-bd32-2c0d74ac3328 in namespace container-probe-9570 STEP: checking the pod's current state and verifying that restartCount is present May 24 00:08:35.271: INFO: Initial restart count of pod liveness-1b11389e-aff3-46ed-bd32-2c0d74ac3328 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:12:36.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9570" for this suite.
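------------------------------
The probe test above runs for roughly four minutes precisely because passing means nothing happens: the restartCount recorded at 00:08:35 must still be 0 when the pod is torn down at 00:12:36. A sketch of a pod that keeps such a tcp:8080 liveness probe green, assuming k8s.io/api v0.18.x (where the probe handler is the embedded Handler field; later releases renamed it ProbeHandler); the agnhost netexec arguments and thresholds are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-tcp-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13",
				// netexec listens on 8080, so the TCP dial always succeeds
				// and the kubelet never restarts the container.
				Args: []string{"netexec", "--http-port=8080"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(8080)},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       10,
					FailureThreshold:    3,
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
------------------------------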
• [SLOW TEST:245.335 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":288,"completed":124,"skipped":2215,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:12:36.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:12:37.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-6581" for this suite. •{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":288,"completed":125,"skipped":2224,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:12:37.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 24 00:12:37.181: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:12:44.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7768" for this suite. 
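------------------------------
"PodSpec: initContainers in spec.initContainers" above refers to a restartPolicy Never pod whose init containers must each exit successfully, in order, before the regular container is started; the test then watches the pod for exactly those status transitions. A minimal sketch of that shape against k8s.io/api v0.18.x; the busybox image and the trivial commands are illustrative, not taken from this run:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-containers-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// Init containers run sequentially; each must exit 0 before the
			// next starts, and before the app container starts at all.
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox:1.29", Command: []string{"true"}},
				{Name: "init2", Image: "busybox:1.29", Command: []string{"true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "busybox:1.29", Command: []string{"true"}},
			},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
------------------------------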
• [SLOW TEST:7.972 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":288,"completed":126,"skipped":2234,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:12:45.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2325.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2325.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2325.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2325.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2325.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2325.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2325.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2325.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2325.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2325.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2325.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 227.208.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.208.227_udp@PTR;check="$$(dig +tcp +noall +answer +search 227.208.99.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.99.208.227_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2325.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2325.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2325.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2325.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2325.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2325.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2325.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2325.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2325.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2325.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2325.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 227.208.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.208.227_udp@PTR;check="$$(dig +tcp +noall +answer +search 227.208.99.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.99.208.227_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 24 00:12:53.679: INFO: Unable to read wheezy_udp@dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:12:53.683: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:12:53.685: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:12:53.688: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:12:53.711: INFO: Unable to read jessie_udp@dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:12:53.714: INFO: Unable to read jessie_tcp@dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:12:53.716: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:12:53.718: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:12:53.733: INFO: Lookups using dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40 failed for: [wheezy_udp@dns-test-service.dns-2325.svc.cluster.local wheezy_tcp@dns-test-service.dns-2325.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local jessie_udp@dns-test-service.dns-2325.svc.cluster.local jessie_tcp@dns-test-service.dns-2325.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local] May 24 00:12:58.739: INFO: Unable to read wheezy_udp@dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:12:58.744: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods 
dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:12:58.747: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:12:58.749: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:12:58.768: INFO: Unable to read jessie_udp@dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:12:58.771: INFO: Unable to read jessie_tcp@dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:12:58.774: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:12:58.777: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:12:58.794: INFO: Lookups using dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40 failed for: [wheezy_udp@dns-test-service.dns-2325.svc.cluster.local wheezy_tcp@dns-test-service.dns-2325.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local jessie_udp@dns-test-service.dns-2325.svc.cluster.local jessie_tcp@dns-test-service.dns-2325.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local] May 24 00:13:03.743: INFO: Unable to read wheezy_udp@dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:13:03.746: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:13:03.750: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:13:03.752: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:13:03.774: INFO: Unable to read jessie_udp@dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the 
server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:13:03.777: INFO: Unable to read jessie_tcp@dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:13:03.779: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:13:03.782: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:13:03.800: INFO: Lookups using dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40 failed for: [wheezy_udp@dns-test-service.dns-2325.svc.cluster.local wheezy_tcp@dns-test-service.dns-2325.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local jessie_udp@dns-test-service.dns-2325.svc.cluster.local jessie_tcp@dns-test-service.dns-2325.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local] May 24 00:13:08.738: INFO: Unable to read wheezy_udp@dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:13:08.742: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:13:08.744: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:13:08.747: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:13:08.765: INFO: Unable to read jessie_udp@dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:13:08.767: INFO: Unable to read jessie_tcp@dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:13:08.770: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:13:08.772: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local from pod 
dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:13:08.788: INFO: Unable to read 10.99.208.227_tcp@PTR from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: Get https://172.30.12.66:32773/api/v1/namespaces/dns-2325/pods/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40/proxy/results/10.99.208.227_tcp@PTR: stream error: stream ID 5455; INTERNAL_ERROR May 24 00:13:08.788: INFO: Lookups using dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40 failed for: [wheezy_udp@dns-test-service.dns-2325.svc.cluster.local wheezy_tcp@dns-test-service.dns-2325.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local jessie_udp@dns-test-service.dns-2325.svc.cluster.local jessie_tcp@dns-test-service.dns-2325.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local 10.99.208.227_tcp@PTR] May 24 00:13:13.738: INFO: Unable to read wheezy_udp@dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:13:13.740: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:13:13.743: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:13:13.746: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:13:13.761: INFO: Unable to read jessie_udp@dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:13:13.764: INFO: Unable to read jessie_tcp@dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:13:13.766: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:13:13.769: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:13:13.842: INFO: Lookups using dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40 failed for: [wheezy_udp@dns-test-service.dns-2325.svc.cluster.local wheezy_tcp@dns-test-service.dns-2325.svc.cluster.local 
wheezy_udp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local jessie_udp@dns-test-service.dns-2325.svc.cluster.local jessie_tcp@dns-test-service.dns-2325.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local] May 24 00:13:18.738: INFO: Unable to read wheezy_udp@dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:13:18.741: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:13:18.744: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:13:18.747: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:13:18.765: INFO: Unable to read jessie_udp@dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:13:18.768: INFO: Unable to read jessie_tcp@dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:13:18.770: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:13:18.773: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local from pod dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40: the server could not find the requested resource (get pods dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40) May 24 00:13:18.809: INFO: Lookups using dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40 failed for: [wheezy_udp@dns-test-service.dns-2325.svc.cluster.local wheezy_tcp@dns-test-service.dns-2325.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local jessie_udp@dns-test-service.dns-2325.svc.cluster.local jessie_tcp@dns-test-service.dns-2325.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2325.svc.cluster.local] May 24 00:13:23.792: INFO: DNS probes using dns-2325/dns-test-4ff7bc99-504f-470d-89cc-36394f8cee40 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:13:24.538: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "dns-2325" for this suite. • [SLOW TEST:39.570 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":288,"completed":127,"skipped":2241,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:13:24.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... May 24 00:13:24.740: INFO: Created pod &Pod{ObjectMeta:{dns-855 dns-855 /api/v1/namespaces/dns-855/pods/dns-855 4f504053-e5a7-47bb-b59a-d0358e3527fe 7152687 0 2020-05-24 00:13:24 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-05-24 00:13:24 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pgx5v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pgx5v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pgx5v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,Hos
tNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 00:13:24.743: INFO: The status of Pod dns-855 is Pending, waiting for it to be Running (with Ready = true) May 24 00:13:26.889: INFO: The status of Pod dns-855 is Pending, waiting for it to be Running (with Ready = true) May 24 00:13:28.747: INFO: The status of Pod dns-855 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... May 24 00:13:28.747: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-855 PodName:dns-855 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 00:13:28.747: INFO: >>> kubeConfig: /root/.kube/config I0524 00:13:28.786601 7 log.go:172] (0xc00291c160) (0xc0023ac280) Create stream I0524 00:13:28.786637 7 log.go:172] (0xc00291c160) (0xc0023ac280) Stream added, broadcasting: 1 I0524 00:13:28.789437 7 log.go:172] (0xc00291c160) Reply frame received for 1 I0524 00:13:28.789491 7 log.go:172] (0xc00291c160) (0xc002442000) Create stream I0524 00:13:28.789512 7 log.go:172] (0xc00291c160) (0xc002442000) Stream added, broadcasting: 3 I0524 00:13:28.790461 7 log.go:172] (0xc00291c160) Reply frame received for 3 I0524 00:13:28.790496 7 log.go:172] (0xc00291c160) (0xc0023ac320) Create stream I0524 00:13:28.790508 7 log.go:172] (0xc00291c160) (0xc0023ac320) Stream added, broadcasting: 5 I0524 00:13:28.791488 7 log.go:172] (0xc00291c160) Reply frame received for 5 I0524 00:13:28.908007 7 log.go:172] (0xc00291c160) Data frame received for 3 I0524 00:13:28.908037 7 log.go:172] (0xc002442000) (3) Data frame handling I0524 00:13:28.908049 7 log.go:172] (0xc002442000) (3) Data frame sent I0524 00:13:28.909565 7 log.go:172] (0xc00291c160) Data frame received for 3 I0524 00:13:28.909583 7 log.go:172] (0xc002442000) (3) Data frame handling I0524 00:13:28.909617 7 log.go:172] (0xc00291c160) Data frame received for 5 I0524 00:13:28.909647 7 log.go:172] (0xc0023ac320) (5) Data frame handling I0524 00:13:28.910989 7 log.go:172] (0xc00291c160) Data frame received for 1 I0524 00:13:28.911013 7 log.go:172] (0xc0023ac280) (1) Data frame handling I0524 00:13:28.911032 7 log.go:172] (0xc0023ac280) (1) Data frame sent I0524 00:13:28.911103 7 log.go:172] 
(0xc00291c160) (0xc0023ac280) Stream removed, broadcasting: 1 I0524 00:13:28.911137 7 log.go:172] (0xc00291c160) Go away received I0524 00:13:28.911214 7 log.go:172] (0xc00291c160) (0xc0023ac280) Stream removed, broadcasting: 1 I0524 00:13:28.911233 7 log.go:172] (0xc00291c160) (0xc002442000) Stream removed, broadcasting: 3 I0524 00:13:28.911243 7 log.go:172] (0xc00291c160) (0xc0023ac320) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... May 24 00:13:28.911: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-855 PodName:dns-855 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 00:13:28.911: INFO: >>> kubeConfig: /root/.kube/config I0524 00:13:28.996178 7 log.go:172] (0xc002c73080) (0xc002442280) Create stream I0524 00:13:28.996214 7 log.go:172] (0xc002c73080) (0xc002442280) Stream added, broadcasting: 1 I0524 00:13:28.998800 7 log.go:172] (0xc002c73080) Reply frame received for 1 I0524 00:13:28.998830 7 log.go:172] (0xc002c73080) (0xc0028c03c0) Create stream I0524 00:13:28.998839 7 log.go:172] (0xc002c73080) (0xc0028c03c0) Stream added, broadcasting: 3 I0524 00:13:28.999626 7 log.go:172] (0xc002c73080) Reply frame received for 3 I0524 00:13:28.999649 7 log.go:172] (0xc002c73080) (0xc0023ac460) Create stream I0524 00:13:28.999657 7 log.go:172] (0xc002c73080) (0xc0023ac460) Stream added, broadcasting: 5 I0524 00:13:29.000428 7 log.go:172] (0xc002c73080) Reply frame received for 5 I0524 00:13:29.070108 7 log.go:172] (0xc002c73080) Data frame received for 3 I0524 00:13:29.070150 7 log.go:172] (0xc0028c03c0) (3) Data frame handling I0524 00:13:29.070185 7 log.go:172] (0xc0028c03c0) (3) Data frame sent I0524 00:13:29.071616 7 log.go:172] (0xc002c73080) Data frame received for 3 I0524 00:13:29.071667 7 log.go:172] (0xc0028c03c0) (3) Data frame handling I0524 00:13:29.071733 7 log.go:172] (0xc002c73080) Data frame received for 5 I0524 00:13:29.071772 7 log.go:172] (0xc0023ac460) (5) Data frame handling I0524 00:13:29.073061 7 log.go:172] (0xc002c73080) Data frame received for 1 I0524 00:13:29.073107 7 log.go:172] (0xc002442280) (1) Data frame handling I0524 00:13:29.073330 7 log.go:172] (0xc002442280) (1) Data frame sent I0524 00:13:29.073348 7 log.go:172] (0xc002c73080) (0xc002442280) Stream removed, broadcasting: 1 I0524 00:13:29.073364 7 log.go:172] (0xc002c73080) Go away received I0524 00:13:29.073632 7 log.go:172] (0xc002c73080) (0xc002442280) Stream removed, broadcasting: 1 I0524 00:13:29.073656 7 log.go:172] (0xc002c73080) (0xc0028c03c0) Stream removed, broadcasting: 3 I0524 00:13:29.073696 7 log.go:172] (0xc002c73080) (0xc0023ac460) Stream removed, broadcasting: 5 May 24 00:13:29.073: INFO: Deleting pod dns-855... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:13:29.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-855" for this suite. 
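
For reference, the pod created above (dnsPolicy=None with a customized dnsConfig) reduces to the manifest below. This is a minimal sketch reconstructed from the pod dump, keeping only the fields the test sets; everything else is defaulted:

apiVersion: v1
kind: Pod
metadata:
  name: dns-855
  namespace: dns-855
spec:
  containers:
  - name: agnhost
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
    args: ["pause"]
  dnsPolicy: "None"                      # bypass cluster DNS; use only dnsConfig below
  dnsConfig:
    nameservers: ["1.1.1.1"]             # verified above via /agnhost dns-server-list
    searches: ["resolv.conf.local"]      # verified above via /agnhost dns-suffix
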
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":288,"completed":128,"skipped":2269,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:13:29.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 24 00:13:29.335: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties May 24 00:13:32.856: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3550 create -f -' May 24 00:13:37.094: INFO: stderr: "" May 24 00:13:37.094: INFO: stdout: "e2e-test-crd-publish-openapi-3718-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 24 00:13:37.094: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3550 delete e2e-test-crd-publish-openapi-3718-crds test-foo' May 24 00:13:37.236: INFO: stderr: "" May 24 00:13:37.236: INFO: stdout: "e2e-test-crd-publish-openapi-3718-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" May 24 00:13:37.236: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3550 apply -f -' May 24 00:13:38.612: INFO: stderr: "" May 24 00:13:38.612: INFO: stdout: "e2e-test-crd-publish-openapi-3718-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 24 00:13:38.612: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3550 delete e2e-test-crd-publish-openapi-3718-crds test-foo' May 24 00:13:38.724: INFO: stderr: "" May 24 00:13:38.724: INFO: stdout: "e2e-test-crd-publish-openapi-3718-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema May 24 00:13:38.724: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3550 create -f -' May 24 00:13:38.961: INFO: rc: 1 May 24 00:13:38.961: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3550 apply -f -' May 24 00:13:39.226: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties May 24 00:13:39.226: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3550 create -f -' May 24 00:13:39.495: INFO: rc: 1 May 24 00:13:39.495: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3550 apply -f -' May 24 00:13:39.762: INFO: rc: 1 STEP: kubectl explain works to explain CR properties May 24 00:13:39.762: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3718-crds' May 24 00:13:40.017: INFO: stderr: "" May 24 00:13:40.017: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3718-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively May 24 00:13:40.018: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3718-crds.metadata' May 24 00:13:40.273: INFO: stderr: "" May 24 00:13:40.273: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3718-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. 
This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. 
More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" May 24 00:13:40.274: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3718-crds.spec' May 24 00:13:40.551: INFO: stderr: "" May 24 00:13:40.551: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3718-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" May 24 00:13:40.551: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3718-crds.spec.bars' May 24 00:13:40.845: INFO: stderr: "" May 24 00:13:40.845: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3718-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist May 24 00:13:40.846: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3718-crds.spec.bars2' May 24 00:13:41.108: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:13:44.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3550" for this suite. • [SLOW TEST:14.889 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":288,"completed":129,"skipped":2275,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:13:44.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:13:44.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1701" for this suite. 
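
The CRD exercised by the validation-schema test above is not printed in the log, but the kubectl explain output pins down its published schema. Below is a sketch of a CRD that would produce that output; field names and descriptions are taken from the output, while the exact schema wiring is inferred and may differ from the test's actual object:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-crd-publish-openapi-3718-crds.crd-publish-openapi-test-foo.example.com
spec:
  group: crd-publish-openapi-test-foo.example.com
  scope: Namespaced
  names:
    plural: e2e-test-crd-publish-openapi-3718-crds
    kind: E2e-test-crd-publish-openapi-3718-crd
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        description: Foo CRD for Testing
        type: object
        properties:
          spec:
            description: Specification of Foo
            type: object
            properties:
              bars:
                description: List of Bars and their specs.
                type: array
                items:
                  type: object
                  required: ["name"]    # why create/apply without required properties returns rc 1
                  properties:
                    name:
                      description: Name of Bar.
                      type: string
                    age:
                      description: Age of Bar.
                      type: string
                    bazs:
                      description: List of Bazs.
                      type: array
                      items:
                        type: string
          status:
            description: Status of Foo
            type: object
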
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":288,"completed":130,"skipped":2302,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:13:44.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 00:13:44.770: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 24 00:13:46.782: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876024, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876024, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876024, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876024, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 00:13:48.786: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876024, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876024, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876024, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876024, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 00:13:51.820: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 
[It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 24 00:13:51.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:13:52.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8555" for this suite. STEP: Destroying namespace "webhook-8555-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.873 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":288,"completed":131,"skipped":2308,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:13:53.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 24 00:13:53.099: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:14:01.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2351" for this suite. 
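
The init-container run above only logs "PodSpec: initContainers in spec.initContainers". The shape being exercised is a restartPolicy=Always pod whose init containers each run to completion, in order, before the regular containers start; a minimal illustrative manifest (images and commands are placeholders, not taken from the log):

apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo
spec:
  restartPolicy: Always          # the RestartAlways case this test covers
  initContainers:                # run sequentially; each must exit 0 first
  - name: init1
    image: busybox
    command: ["/bin/true"]
  - name: init2
    image: busybox
    command: ["/bin/true"]
  containers:                    # started only after both inits succeed
  - name: run1
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
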
• [SLOW TEST:8.179 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":288,"completed":132,"skipped":2322,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:14:01.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:14:05.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4469" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":133,"skipped":2334,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:14:05.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 24 00:14:05.461: INFO: Waiting up to 5m0s for pod "downwardapi-volume-88c0a35f-e6f2-4f50-9cfc-fff0311fea8e" in namespace "downward-api-220" to be "Succeeded or Failed" May 24 00:14:05.508: INFO: Pod "downwardapi-volume-88c0a35f-e6f2-4f50-9cfc-fff0311fea8e": Phase="Pending", Reason="", readiness=false. Elapsed: 46.339242ms May 24 00:14:07.511: INFO: Pod "downwardapi-volume-88c0a35f-e6f2-4f50-9cfc-fff0311fea8e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.049869863s May 24 00:14:09.515: INFO: Pod "downwardapi-volume-88c0a35f-e6f2-4f50-9cfc-fff0311fea8e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053709281s STEP: Saw pod success May 24 00:14:09.515: INFO: Pod "downwardapi-volume-88c0a35f-e6f2-4f50-9cfc-fff0311fea8e" satisfied condition "Succeeded or Failed" May 24 00:14:09.520: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-88c0a35f-e6f2-4f50-9cfc-fff0311fea8e container client-container: STEP: delete the pod May 24 00:14:09.626: INFO: Waiting for pod downwardapi-volume-88c0a35f-e6f2-4f50-9cfc-fff0311fea8e to disappear May 24 00:14:09.628: INFO: Pod downwardapi-volume-88c0a35f-e6f2-4f50-9cfc-fff0311fea8e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:14:09.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-220" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":134,"skipped":2341,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:14:09.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 24 00:14:09.710: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f8695f50-ae15-4f10-9e10-0831a65c38ac" in namespace "projected-240" to be "Succeeded or Failed" May 24 00:14:09.714: INFO: Pod "downwardapi-volume-f8695f50-ae15-4f10-9e10-0831a65c38ac": Phase="Pending", Reason="", readiness=false. Elapsed: 3.936314ms May 24 00:14:11.717: INFO: Pod "downwardapi-volume-f8695f50-ae15-4f10-9e10-0831a65c38ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006925588s May 24 00:14:13.721: INFO: Pod "downwardapi-volume-f8695f50-ae15-4f10-9e10-0831a65c38ac": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011375417s STEP: Saw pod success May 24 00:14:13.721: INFO: Pod "downwardapi-volume-f8695f50-ae15-4f10-9e10-0831a65c38ac" satisfied condition "Succeeded or Failed" May 24 00:14:13.724: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-f8695f50-ae15-4f10-9e10-0831a65c38ac container client-container: STEP: delete the pod May 24 00:14:13.766: INFO: Waiting for pod downwardapi-volume-f8695f50-ae15-4f10-9e10-0831a65c38ac to disappear May 24 00:14:13.776: INFO: Pod downwardapi-volume-f8695f50-ae15-4f10-9e10-0831a65c38ac no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:14:13.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-240" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":288,"completed":135,"skipped":2344,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:14:13.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 00:14:14.622: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 24 00:14:16.631: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876054, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876054, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876054, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876054, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 00:14:19.691: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 24 00:14:19.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering 
the mutating webhook for custom resource e2e-test-webhook-7857-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:14:20.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3373" for this suite. STEP: Destroying namespace "webhook-3373-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.172 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":288,"completed":136,"skipped":2350,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:14:20.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD May 24 00:14:21.092: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:14:38.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2153" for this suite. 
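
The renamed-version test above starts from a multi-version CRD. The relevant part is the versions list: renaming means replacing one entry's name, after which the old name must disappear from the published OpenAPI spec while the untouched version stays as-is. A fragment under assumed version names (the log does not show which versions the test uses):

spec:
  versions:
  - name: v2
    served: true
    storage: true
    schema:
      openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}
  - name: v3                     # renaming this entry republishes its schema under the new name
    served: true
    storage: false
    schema:
      openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}
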
• [SLOW TEST:17.120 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":288,"completed":137,"skipped":2372,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:14:38.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-44d4ddd9-42ad-4ddf-a939-5c83e30cb3ac STEP: Creating a pod to test consume configMaps May 24 00:14:38.190: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-acef6068-be92-4980-8770-423e152bdb7d" in namespace "projected-7586" to be "Succeeded or Failed" May 24 00:14:38.208: INFO: Pod "pod-projected-configmaps-acef6068-be92-4980-8770-423e152bdb7d": Phase="Pending", Reason="", readiness=false. Elapsed: 17.937461ms May 24 00:14:40.214: INFO: Pod "pod-projected-configmaps-acef6068-be92-4980-8770-423e152bdb7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024260411s May 24 00:14:42.218: INFO: Pod "pod-projected-configmaps-acef6068-be92-4980-8770-423e152bdb7d": Phase="Running", Reason="", readiness=true. Elapsed: 4.028804021s May 24 00:14:44.222: INFO: Pod "pod-projected-configmaps-acef6068-be92-4980-8770-423e152bdb7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.032661959s STEP: Saw pod success May 24 00:14:44.222: INFO: Pod "pod-projected-configmaps-acef6068-be92-4980-8770-423e152bdb7d" satisfied condition "Succeeded or Failed" May 24 00:14:44.225: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-acef6068-be92-4980-8770-423e152bdb7d container projected-configmap-volume-test: STEP: delete the pod May 24 00:14:44.311: INFO: Waiting for pod pod-projected-configmaps-acef6068-be92-4980-8770-423e152bdb7d to disappear May 24 00:14:44.323: INFO: Pod pod-projected-configmaps-acef6068-be92-4980-8770-423e152bdb7d no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:14:44.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7586" for this suite. 
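
The defaultMode test above mounts the configMap through a projected volume and checks the resulting file permissions. A minimal sketch of such a pod follows; the configMap name comes from the log, while the mode value, paths, image, and command are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected-configmap-volume"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      defaultMode: 0400          # permission bits applied to every projected file
      sources:
      - configMap:
          name: projected-configmap-test-volume-44d4ddd9-42ad-4ddf-a939-5c83e30cb3ac
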
• [SLOW TEST:6.254 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":138,"skipped":2419,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:14:44.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-cff4bc1b-40e4-4304-8cec-f61d7e5dba4d STEP: Creating configMap with name cm-test-opt-upd-d44df015-92d2-40eb-bf1c-f323d0c8fb4e STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-cff4bc1b-40e4-4304-8cec-f61d7e5dba4d STEP: Updating configmap cm-test-opt-upd-d44df015-92d2-40eb-bf1c-f323d0c8fb4e STEP: Creating configMap with name cm-test-opt-create-bd86718e-eae3-4a37-9d55-8440828ac71a STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:14:54.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7819" for this suite. 
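
The optional-updates test above projects three configMaps (one later deleted, one updated, one created only after the pod starts) into a single volume. Marking each source optional is what lets the pod start and keep running through all three changes; a sketch of the volume stanza using the names from the log:

volumes:
- name: projected-configmap-volumes
  projected:
    sources:
    - configMap:
        name: cm-test-opt-del-cff4bc1b-40e4-4304-8cec-f61d7e5dba4d
        optional: true           # volume survives this map's deletion
    - configMap:
        name: cm-test-opt-upd-d44df015-92d2-40eb-bf1c-f323d0c8fb4e
        optional: true           # updates propagate into the mounted files
    - configMap:
        name: cm-test-opt-create-bd86718e-eae3-4a37-9d55-8440828ac71a
        optional: true           # may not exist yet when the pod is created
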
• [SLOW TEST:10.426 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":139,"skipped":2448,"failed":0} SS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:14:54.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 24 00:14:54.876: INFO: Waiting up to 5m0s for pod "downward-api-6cb09762-c16f-4b5d-9403-85d592f6f82c" in namespace "downward-api-2699" to be "Succeeded or Failed" May 24 00:14:54.906: INFO: Pod "downward-api-6cb09762-c16f-4b5d-9403-85d592f6f82c": Phase="Pending", Reason="", readiness=false. Elapsed: 30.162827ms May 24 00:14:56.912: INFO: Pod "downward-api-6cb09762-c16f-4b5d-9403-85d592f6f82c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035889789s May 24 00:14:58.916: INFO: Pod "downward-api-6cb09762-c16f-4b5d-9403-85d592f6f82c": Phase="Running", Reason="", readiness=true. Elapsed: 4.040271129s May 24 00:15:00.920: INFO: Pod "downward-api-6cb09762-c16f-4b5d-9403-85d592f6f82c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.044076987s STEP: Saw pod success May 24 00:15:00.920: INFO: Pod "downward-api-6cb09762-c16f-4b5d-9403-85d592f6f82c" satisfied condition "Succeeded or Failed" May 24 00:15:00.923: INFO: Trying to get logs from node latest-worker pod downward-api-6cb09762-c16f-4b5d-9403-85d592f6f82c container dapi-container: STEP: delete the pod May 24 00:15:01.094: INFO: Waiting for pod downward-api-6cb09762-c16f-4b5d-9403-85d592f6f82c to disappear May 24 00:15:01.207: INFO: Pod downward-api-6cb09762-c16f-4b5d-9403-85d592f6f82c no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:15:01.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2699" for this suite. 
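
The downward-API test above injects the container's own limits and requests into its environment through resourceFieldRef. A minimal sketch, with variable names and resource quantities chosen for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
    - name: CPU_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
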
• [SLOW TEST:6.459 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":288,"completed":140,"skipped":2450,"failed":0} SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:15:01.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-0a33e22c-0293-4102-91e2-6e2be5c527a8 STEP: Creating a pod to test consume secrets May 24 00:15:01.350: INFO: Waiting up to 5m0s for pod "pod-secrets-b234a6f1-303f-43b1-b791-2b93820e5739" in namespace "secrets-5482" to be "Succeeded or Failed" May 24 00:15:01.395: INFO: Pod "pod-secrets-b234a6f1-303f-43b1-b791-2b93820e5739": Phase="Pending", Reason="", readiness=false. Elapsed: 45.196096ms May 24 00:15:03.398: INFO: Pod "pod-secrets-b234a6f1-303f-43b1-b791-2b93820e5739": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047707993s May 24 00:15:05.401: INFO: Pod "pod-secrets-b234a6f1-303f-43b1-b791-2b93820e5739": Phase="Running", Reason="", readiness=true. Elapsed: 4.05130838s May 24 00:15:07.406: INFO: Pod "pod-secrets-b234a6f1-303f-43b1-b791-2b93820e5739": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.055745144s STEP: Saw pod success May 24 00:15:07.406: INFO: Pod "pod-secrets-b234a6f1-303f-43b1-b791-2b93820e5739" satisfied condition "Succeeded or Failed" May 24 00:15:07.408: INFO: Trying to get logs from node latest-worker pod pod-secrets-b234a6f1-303f-43b1-b791-2b93820e5739 container secret-volume-test: STEP: delete the pod May 24 00:15:07.439: INFO: Waiting for pod pod-secrets-b234a6f1-303f-43b1-b791-2b93820e5739 to disappear May 24 00:15:07.456: INFO: Pod pod-secrets-b234a6f1-303f-43b1-b791-2b93820e5739 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:15:07.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5482" for this suite. 
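
The secrets test above combines three knobs: a non-root user, an fsGroup, and a defaultMode on the secret volume, so the non-root process can still read the group-owned files. A sketch with the secret name from the log (uid, gid, and mode are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000              # non-root
    fsGroup: 1000                # group ownership applied to the volume's files
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -ln /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-0a33e22c-0293-4102-91e2-6e2be5c527a8
      defaultMode: 0440          # group-readable, so the fsGroup member can read
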
• [SLOW TEST:6.245 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":141,"skipped":2455,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:15:07.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD May 24 00:15:07.531: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:15:21.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-497" for this suite. 
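
For the not-served case above, the toggle is the per-version served flag: flipping it to false removes that version's definitions from the published spec while the other version keeps serving. A fragment under assumed version names:

spec:
  versions:
  - name: v2
    served: false                # schema dropped from the published OpenAPI spec
    storage: true
  - name: v3
    served: true                 # must remain unchanged in the spec
    storage: false
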
• [SLOW TEST:13.581 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":288,"completed":142,"skipped":2460,"failed":0} [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:15:21.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0524 00:16:02.290859 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 24 00:16:02.290: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:16:02.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9736" for this suite. 
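
"Delete options say so" in the test above means the DELETE request carries propagationPolicy: Orphan, so the garbage collector strips the ownerReferences from the RC's pods instead of deleting them; the deployment/ReplicaSet spec further below exercises the same mechanism. Expressed as the options object the request body deserializes into:

kind: DeleteOptions
apiVersion: v1
propagationPolicy: Orphan        # dependents are orphaned, not cascade-deleted
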
• [SLOW TEST:41.254 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":288,"completed":143,"skipped":2460,"failed":0} SSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:16:02.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 24 00:16:12.718: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 24 00:16:12.726: INFO: Pod pod-with-prestop-http-hook still exists May 24 00:16:14.726: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 24 00:16:14.731: INFO: Pod pod-with-prestop-http-hook still exists May 24 00:16:16.726: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 24 00:16:16.731: INFO: Pod pod-with-prestop-http-hook still exists May 24 00:16:18.726: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 24 00:16:18.731: INFO: Pod pod-with-prestop-http-hook still exists May 24 00:16:20.726: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 24 00:16:20.730: INFO: Pod pod-with-prestop-http-hook still exists May 24 00:16:22.726: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 24 00:16:22.730: INFO: Pod pod-with-prestop-http-hook still exists May 24 00:16:24.726: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 24 00:16:24.730: INFO: Pod pod-with-prestop-http-hook still exists May 24 00:16:26.726: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 24 00:16:26.730: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:16:26.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1557" for this suite. 
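------------------------------
A pared-down version of the pod this spec creates; the image, path, and handler address are placeholders for the helper pod the test deploys first:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: main
    image: docker.io/library/httpd:2.4.38-alpine
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop   # the test asserts the handler saw this request
          port: 8080
          host: 10.244.1.100        # placeholder for the handler pod's IP
EOF
# Deleting the pod fires the preStop hook during the termination grace period:
kubectl delete pod pod-with-prestop-http-hook
------------------------------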
• [SLOW TEST:24.448 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":288,"completed":144,"skipped":2464,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:16:26.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0524 00:16:27.888980 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 24 00:16:27.889: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:16:27.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9272" for this suite. 
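------------------------------
The same orphaning can be requested from the CLI instead of raw DeleteOptions. On kubectl v1.20+ the flag is spelled --cascade=orphan (releases of the vintage driving this run use --cascade=false); the deployment name and label below are hypothetical:

kubectl delete deployment nginx-deployment --cascade=orphan
# The Deployment is removed, but the ReplicaSet it created keeps running:
kubectl get rs -l app=nginx-deployment
------------------------------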
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":288,"completed":145,"skipped":2495,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:16:27.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 00:16:29.880: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 24 00:16:31.991: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876189, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876189, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876190, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876189, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 00:16:35.391: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:16:36.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3593" for this suite. STEP: Destroying namespace "webhook-3593-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.348 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":288,"completed":146,"skipped":2498,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:16:36.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-2b1bff9d-f4b2-481c-90d5-6d479567105a STEP: Creating a pod to test consume configMaps May 24 00:16:36.358: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e06cc3d0-1f6e-47fd-bc5c-2b2bee07ca68" in namespace "projected-5038" to be "Succeeded or Failed" May 24 00:16:36.413: INFO: Pod "pod-projected-configmaps-e06cc3d0-1f6e-47fd-bc5c-2b2bee07ca68": Phase="Pending", Reason="", readiness=false. Elapsed: 54.664935ms May 24 00:16:38.418: INFO: Pod "pod-projected-configmaps-e06cc3d0-1f6e-47fd-bc5c-2b2bee07ca68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059866852s May 24 00:16:40.423: INFO: Pod "pod-projected-configmaps-e06cc3d0-1f6e-47fd-bc5c-2b2bee07ca68": Phase="Running", Reason="", readiness=true. Elapsed: 4.064292339s May 24 00:16:42.427: INFO: Pod "pod-projected-configmaps-e06cc3d0-1f6e-47fd-bc5c-2b2bee07ca68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.068721461s STEP: Saw pod success May 24 00:16:42.427: INFO: Pod "pod-projected-configmaps-e06cc3d0-1f6e-47fd-bc5c-2b2bee07ca68" satisfied condition "Succeeded or Failed" May 24 00:16:42.431: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-e06cc3d0-1f6e-47fd-bc5c-2b2bee07ca68 container projected-configmap-volume-test: STEP: delete the pod May 24 00:16:42.476: INFO: Waiting for pod pod-projected-configmaps-e06cc3d0-1f6e-47fd-bc5c-2b2bee07ca68 to disappear May 24 00:16:42.490: INFO: Pod pod-projected-configmaps-e06cc3d0-1f6e-47fd-bc5c-2b2bee07ca68 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:16:42.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5038" for this suite. 
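------------------------------
In outline, the spec mounts the same configMap through two projected volumes in one pod. A minimal sketch with placeholder names:

kubectl create configmap demo-cm --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-twice
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.29
    # Both mounts should expose the same key with the same content.
    command: ["sh", "-c", "cat /etc/cm-one/data-1 /etc/cm-two/data-1"]
    volumeMounts:
    - { name: cm-one, mountPath: /etc/cm-one }
    - { name: cm-two, mountPath: /etc/cm-two }
  volumes:
  - name: cm-one
    projected:
      sources:
      - configMap: { name: demo-cm }
  - name: cm-two
    projected:
      sources:
      - configMap: { name: demo-cm }
EOF
------------------------------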
• [SLOW TEST:6.252 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":288,"completed":147,"skipped":2525,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:16:42.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs May 24 00:16:42.602: INFO: Waiting up to 5m0s for pod "pod-d0c43f98-37f3-41eb-a60d-af4569a54fcf" in namespace "emptydir-7196" to be "Succeeded or Failed" May 24 00:16:42.633: INFO: Pod "pod-d0c43f98-37f3-41eb-a60d-af4569a54fcf": Phase="Pending", Reason="", readiness=false. Elapsed: 29.998713ms May 24 00:16:44.636: INFO: Pod "pod-d0c43f98-37f3-41eb-a60d-af4569a54fcf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033409354s May 24 00:16:46.640: INFO: Pod "pod-d0c43f98-37f3-41eb-a60d-af4569a54fcf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037471344s STEP: Saw pod success May 24 00:16:46.640: INFO: Pod "pod-d0c43f98-37f3-41eb-a60d-af4569a54fcf" satisfied condition "Succeeded or Failed" May 24 00:16:46.643: INFO: Trying to get logs from node latest-worker2 pod pod-d0c43f98-37f3-41eb-a60d-af4569a54fcf container test-container: STEP: delete the pod May 24 00:16:46.688: INFO: Waiting for pod pod-d0c43f98-37f3-41eb-a60d-af4569a54fcf to disappear May 24 00:16:46.735: INFO: Pod pod-d0c43f98-37f3-41eb-a60d-af4569a54fcf no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:16:46.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7196" for this suite. 
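------------------------------
The (root,0666,tmpfs) variant boils down to a memory-backed emptyDir and a root-owned file created with mode 0666; a hand-rolled approximation:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.29
    # Create a 0666 file, then show the permissions and that the mount is tmpfs.
    command: ["sh", "-c", "touch /ephemeral/f && chmod 0666 /ephemeral/f && ls -l /ephemeral/f && mount | grep /ephemeral"]
    volumeMounts:
    - name: scratch
      mountPath: /ephemeral
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory   # tmpfs rather than node disk
EOF
kubectl logs emptydir-tmpfs-demo   # once the pod has Succeeded
------------------------------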
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":148,"skipped":2526,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:16:46.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name projected-secret-test-b8866aa8-9b85-422c-a33b-cec64475c009 STEP: Creating a pod to test consume secrets May 24 00:16:46.879: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-58a9de3b-ecba-4cf5-8be5-3f4e9cddb4d7" in namespace "projected-7104" to be "Succeeded or Failed" May 24 00:16:46.883: INFO: Pod "pod-projected-secrets-58a9de3b-ecba-4cf5-8be5-3f4e9cddb4d7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.814648ms May 24 00:16:48.888: INFO: Pod "pod-projected-secrets-58a9de3b-ecba-4cf5-8be5-3f4e9cddb4d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008388368s May 24 00:16:50.892: INFO: Pod "pod-projected-secrets-58a9de3b-ecba-4cf5-8be5-3f4e9cddb4d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012587553s STEP: Saw pod success May 24 00:16:50.892: INFO: Pod "pod-projected-secrets-58a9de3b-ecba-4cf5-8be5-3f4e9cddb4d7" satisfied condition "Succeeded or Failed" May 24 00:16:50.895: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-58a9de3b-ecba-4cf5-8be5-3f4e9cddb4d7 container secret-volume-test: STEP: delete the pod May 24 00:16:51.390: INFO: Waiting for pod pod-projected-secrets-58a9de3b-ecba-4cf5-8be5-3f4e9cddb4d7 to disappear May 24 00:16:51.435: INFO: Pod pod-projected-secrets-58a9de3b-ecba-4cf5-8be5-3f4e9cddb4d7 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:16:51.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7104" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":288,"completed":149,"skipped":2556,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:16:51.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating cluster-info May 24 00:16:51.836: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config cluster-info' May 24 00:16:51.989: INFO: stderr: "" May 24 00:16:51.990: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32773/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:16:51.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9803" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":288,"completed":150,"skipped":2559,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:16:52.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:16:52.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-7832" for this suite. 
•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":288,"completed":151,"skipped":2574,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:16:52.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:16:53.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3187" for this suite. STEP: Destroying namespace "nspatchtest-900038c6-8394-4ef8-a4bf-785d11fbd704-2118" for this suite. •{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":288,"completed":152,"skipped":2580,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:16:53.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:303 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller May 24 00:16:53.190: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6598' May 24 00:16:53.566: INFO: stderr: "" May 24 00:16:53.566: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 24 00:16:53.566: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6598' May 24 00:16:53.742: INFO: stderr: "" May 24 00:16:53.742: INFO: stdout: "update-demo-nautilus-mppzz update-demo-nautilus-pm56d " May 24 00:16:53.742: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mppzz -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6598' May 24 00:16:53.834: INFO: stderr: "" May 24 00:16:53.834: INFO: stdout: "" May 24 00:16:53.834: INFO: update-demo-nautilus-mppzz is created but not running May 24 00:16:58.835: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6598' May 24 00:16:58.950: INFO: stderr: "" May 24 00:16:58.950: INFO: stdout: "update-demo-nautilus-mppzz update-demo-nautilus-pm56d " May 24 00:16:58.950: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mppzz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6598' May 24 00:16:59.053: INFO: stderr: "" May 24 00:16:59.053: INFO: stdout: "true" May 24 00:16:59.053: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mppzz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6598' May 24 00:16:59.155: INFO: stderr: "" May 24 00:16:59.155: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 24 00:16:59.155: INFO: validating pod update-demo-nautilus-mppzz May 24 00:16:59.160: INFO: got data: { "image": "nautilus.jpg" } May 24 00:16:59.161: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 24 00:16:59.161: INFO: update-demo-nautilus-mppzz is verified up and running May 24 00:16:59.161: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pm56d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6598' May 24 00:16:59.282: INFO: stderr: "" May 24 00:16:59.282: INFO: stdout: "true" May 24 00:16:59.282: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pm56d -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6598' May 24 00:16:59.376: INFO: stderr: "" May 24 00:16:59.377: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 24 00:16:59.377: INFO: validating pod update-demo-nautilus-pm56d May 24 00:16:59.389: INFO: got data: { "image": "nautilus.jpg" } May 24 00:16:59.389: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 24 00:16:59.389: INFO: update-demo-nautilus-pm56d is verified up and running STEP: scaling down the replication controller May 24 00:16:59.391: INFO: scanned /root for discovery docs: May 24 00:16:59.391: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-6598' May 24 00:17:00.524: INFO: stderr: "" May 24 00:17:00.524: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 24 00:17:00.524: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6598' May 24 00:17:00.807: INFO: stderr: "" May 24 00:17:00.807: INFO: stdout: "update-demo-nautilus-mppzz update-demo-nautilus-pm56d " STEP: Replicas for name=update-demo: expected=1 actual=2 May 24 00:17:05.808: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6598' May 24 00:17:05.926: INFO: stderr: "" May 24 00:17:05.927: INFO: stdout: "update-demo-nautilus-pm56d " May 24 00:17:05.927: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pm56d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6598' May 24 00:17:06.037: INFO: stderr: "" May 24 00:17:06.037: INFO: stdout: "true" May 24 00:17:06.037: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pm56d -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6598' May 24 00:17:06.148: INFO: stderr: "" May 24 00:17:06.148: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 24 00:17:06.148: INFO: validating pod update-demo-nautilus-pm56d May 24 00:17:06.152: INFO: got data: { "image": "nautilus.jpg" } May 24 00:17:06.152: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 24 00:17:06.152: INFO: update-demo-nautilus-pm56d is verified up and running STEP: scaling up the replication controller May 24 00:17:06.155: INFO: scanned /root for discovery docs: May 24 00:17:06.155: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-6598' May 24 00:17:07.303: INFO: stderr: "" May 24 00:17:07.303: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 24 00:17:07.303: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6598' May 24 00:17:07.420: INFO: stderr: "" May 24 00:17:07.420: INFO: stdout: "update-demo-nautilus-pm56d update-demo-nautilus-tk6rw " May 24 00:17:07.421: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pm56d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6598' May 24 00:17:07.520: INFO: stderr: "" May 24 00:17:07.520: INFO: stdout: "true" May 24 00:17:07.520: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pm56d -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6598' May 24 00:17:07.610: INFO: stderr: "" May 24 00:17:07.610: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 24 00:17:07.610: INFO: validating pod update-demo-nautilus-pm56d May 24 00:17:07.613: INFO: got data: { "image": "nautilus.jpg" } May 24 00:17:07.613: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 24 00:17:07.613: INFO: update-demo-nautilus-pm56d is verified up and running May 24 00:17:07.613: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tk6rw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6598' May 24 00:17:07.731: INFO: stderr: "" May 24 00:17:07.731: INFO: stdout: "" May 24 00:17:07.731: INFO: update-demo-nautilus-tk6rw is created but not running May 24 00:17:12.732: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6598' May 24 00:17:12.849: INFO: stderr: "" May 24 00:17:12.849: INFO: stdout: "update-demo-nautilus-pm56d update-demo-nautilus-tk6rw " May 24 00:17:12.850: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pm56d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6598' May 24 00:17:12.963: INFO: stderr: "" May 24 00:17:12.963: INFO: stdout: "true" May 24 00:17:12.963: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pm56d -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6598' May 24 00:17:13.057: INFO: stderr: "" May 24 00:17:13.057: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 24 00:17:13.057: INFO: validating pod update-demo-nautilus-pm56d May 24 00:17:13.060: INFO: got data: { "image": "nautilus.jpg" } May 24 00:17:13.060: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 24 00:17:13.060: INFO: update-demo-nautilus-pm56d is verified up and running May 24 00:17:13.060: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tk6rw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6598' May 24 00:17:13.169: INFO: stderr: "" May 24 00:17:13.169: INFO: stdout: "true" May 24 00:17:13.169: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tk6rw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6598' May 24 00:17:13.250: INFO: stderr: "" May 24 00:17:13.250: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 24 00:17:13.250: INFO: validating pod update-demo-nautilus-tk6rw May 24 00:17:13.253: INFO: got data: { "image": "nautilus.jpg" } May 24 00:17:13.253: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 24 00:17:13.253: INFO: update-demo-nautilus-tk6rw is verified up and running STEP: using delete to clean up resources May 24 00:17:13.253: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6598' May 24 00:17:13.368: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 24 00:17:13.368: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 24 00:17:13.368: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6598' May 24 00:17:13.463: INFO: stderr: "No resources found in kubectl-6598 namespace.\n" May 24 00:17:13.463: INFO: stdout: "" May 24 00:17:13.463: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6598 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 24 00:17:13.596: INFO: stderr: "" May 24 00:17:13.596: INFO: stdout: "update-demo-nautilus-pm56d\nupdate-demo-nautilus-tk6rw\n" May 24 00:17:14.097: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6598' May 24 00:17:14.209: INFO: stderr: "No resources found in kubectl-6598 namespace.\n" May 24 00:17:14.209: INFO: stdout: "" May 24 00:17:14.209: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6598 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 24 00:17:14.305: INFO: stderr: "" May 24 00:17:14.305: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:17:14.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6598" for this suite. 
• [SLOW TEST:21.190 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":288,"completed":153,"skipped":2619,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:17:14.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1559 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 24 00:17:14.526: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-4292' May 24 00:17:14.630: INFO: stderr: "" May 24 00:17:14.630: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created May 24 00:17:19.680: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-4292 -o json' May 24 00:17:19.782: INFO: stderr: "" May 24 00:17:19.783: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-24T00:17:14Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl\",\n \"operation\": \"Update\",\n \"time\": \"2020-05-24T00:17:14Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n 
\"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.1.149\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-05-24T00:17:18Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-4292\",\n \"resourceVersion\": \"7154393\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-4292/pods/e2e-test-httpd-pod\",\n \"uid\": \"e4702ec6-d02d-4b0a-b7f8-dc9ccb186d51\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-tk8jc\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-tk8jc\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-tk8jc\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-24T00:17:14Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-24T00:17:18Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-24T00:17:18Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-24T00:17:14Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://39e2ca92cb343a5693dbd10c668e7605966afc0e66f85ae9619a6963b44210c4\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-24T00:17:17Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.13\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.149\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.149\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": 
\"2020-05-24T00:17:14Z\"\n }\n}\n" STEP: replace the image in the pod May 24 00:17:19.783: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-4292' May 24 00:17:20.070: INFO: stderr: "" May 24 00:17:20.070: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1564 May 24 00:17:20.094: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-4292' May 24 00:17:34.873: INFO: stderr: "" May 24 00:17:34.873: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:17:34.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4292" for this suite. • [SLOW TEST:20.567 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1555 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":288,"completed":154,"skipped":2635,"failed":0} SS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:17:34.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:17:39.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5515" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":288,"completed":155,"skipped":2637,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:17:39.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 00:17:40.428: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 24 00:17:42.440: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876260, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876260, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876260, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876260, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 00:17:44.465: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876260, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876260, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876260, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876260, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 00:17:47.477: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 24 00:17:47.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5949-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:17:48.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3481" for this suite. STEP: Destroying namespace "webhook-3481-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.329 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":288,"completed":156,"skipped":2639,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:17:48.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-6110bd6a-99e9-424d-9bac-8a8f4e29a4e9 STEP: Creating a pod to test consume secrets May 24 00:17:48.967: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b95b165d-a8db-47b7-b23f-b6f079ccbc35" in namespace "projected-7657" to be "Succeeded or Failed" May 24 00:17:48.978: INFO: Pod "pod-projected-secrets-b95b165d-a8db-47b7-b23f-b6f079ccbc35": Phase="Pending", Reason="", readiness=false. Elapsed: 11.15423ms May 24 00:17:50.981: INFO: Pod "pod-projected-secrets-b95b165d-a8db-47b7-b23f-b6f079ccbc35": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014283049s May 24 00:17:52.985: INFO: Pod "pod-projected-secrets-b95b165d-a8db-47b7-b23f-b6f079ccbc35": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.018464253s STEP: Saw pod success May 24 00:17:52.985: INFO: Pod "pod-projected-secrets-b95b165d-a8db-47b7-b23f-b6f079ccbc35" satisfied condition "Succeeded or Failed" May 24 00:17:52.988: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-b95b165d-a8db-47b7-b23f-b6f079ccbc35 container projected-secret-volume-test: STEP: delete the pod May 24 00:17:53.102: INFO: Waiting for pod pod-projected-secrets-b95b165d-a8db-47b7-b23f-b6f079ccbc35 to disappear May 24 00:17:53.128: INFO: Pod pod-projected-secrets-b95b165d-a8db-47b7-b23f-b6f079ccbc35 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:17:53.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7657" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":157,"skipped":2646,"failed":0} SSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:17:53.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-2118 STEP: creating service affinity-nodeport in namespace services-2118 STEP: creating replication controller affinity-nodeport in namespace services-2118 I0524 00:17:53.499156 7 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-2118, replica count: 3 I0524 00:17:56.549568 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0524 00:17:59.549797 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 24 00:17:59.558: INFO: Creating new exec pod May 24 00:18:04.654: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2118 execpod-affinityjpq4t -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' May 24 00:18:05.000: INFO: stderr: "I0524 00:18:04.803273 2320 log.go:172] (0xc0007b44d0) (0xc0004b43c0) Create stream\nI0524 00:18:04.803328 2320 log.go:172] (0xc0007b44d0) (0xc0004b43c0) Stream added, broadcasting: 1\nI0524 00:18:04.805692 2320 log.go:172] (0xc0007b44d0) Reply frame received for 1\nI0524 00:18:04.805748 2320 log.go:172] (0xc0007b44d0) (0xc00025cfa0) Create stream\nI0524 00:18:04.805769 2320 log.go:172] (0xc0007b44d0) (0xc00025cfa0) Stream added, broadcasting: 3\nI0524 00:18:04.806593 2320 log.go:172] (0xc0007b44d0) Reply frame received for 3\nI0524 00:18:04.806624 2320 log.go:172] 
(0xc0007b44d0) (0xc00013b180) Create stream\nI0524 00:18:04.806635 2320 log.go:172] (0xc0007b44d0) (0xc00013b180) Stream added, broadcasting: 5\nI0524 00:18:04.807479 2320 log.go:172] (0xc0007b44d0) Reply frame received for 5\nI0524 00:18:04.978432 2320 log.go:172] (0xc0007b44d0) Data frame received for 5\nI0524 00:18:04.978454 2320 log.go:172] (0xc00013b180) (5) Data frame handling\nI0524 00:18:04.978473 2320 log.go:172] (0xc00013b180) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport 80\nI0524 00:18:04.995039 2320 log.go:172] (0xc0007b44d0) Data frame received for 5\nI0524 00:18:04.995080 2320 log.go:172] (0xc00013b180) (5) Data frame handling\nI0524 00:18:04.995111 2320 log.go:172] (0xc00013b180) (5) Data frame sent\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI0524 00:18:04.995325 2320 log.go:172] (0xc0007b44d0) Data frame received for 3\nI0524 00:18:04.995351 2320 log.go:172] (0xc00025cfa0) (3) Data frame handling\nI0524 00:18:04.995377 2320 log.go:172] (0xc0007b44d0) Data frame received for 5\nI0524 00:18:04.995410 2320 log.go:172] (0xc00013b180) (5) Data frame handling\nI0524 00:18:04.996943 2320 log.go:172] (0xc0007b44d0) Data frame received for 1\nI0524 00:18:04.996960 2320 log.go:172] (0xc0004b43c0) (1) Data frame handling\nI0524 00:18:04.996972 2320 log.go:172] (0xc0004b43c0) (1) Data frame sent\nI0524 00:18:04.996987 2320 log.go:172] (0xc0007b44d0) (0xc0004b43c0) Stream removed, broadcasting: 1\nI0524 00:18:04.997037 2320 log.go:172] (0xc0007b44d0) Go away received\nI0524 00:18:04.997278 2320 log.go:172] (0xc0007b44d0) (0xc0004b43c0) Stream removed, broadcasting: 1\nI0524 00:18:04.997295 2320 log.go:172] (0xc0007b44d0) (0xc00025cfa0) Stream removed, broadcasting: 3\nI0524 00:18:04.997305 2320 log.go:172] (0xc0007b44d0) (0xc00013b180) Stream removed, broadcasting: 5\n" May 24 00:18:05.000: INFO: stdout: "" May 24 00:18:05.001: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2118 execpod-affinityjpq4t -- /bin/sh -x -c nc -zv -t -w 2 10.104.209.158 80' May 24 00:18:05.203: INFO: stderr: "I0524 00:18:05.124323 2341 log.go:172] (0xc000a06000) (0xc00073ee60) Create stream\nI0524 00:18:05.124386 2341 log.go:172] (0xc000a06000) (0xc00073ee60) Stream added, broadcasting: 1\nI0524 00:18:05.126251 2341 log.go:172] (0xc000a06000) Reply frame received for 1\nI0524 00:18:05.126283 2341 log.go:172] (0xc000a06000) (0xc0006cec80) Create stream\nI0524 00:18:05.126297 2341 log.go:172] (0xc000a06000) (0xc0006cec80) Stream added, broadcasting: 3\nI0524 00:18:05.127265 2341 log.go:172] (0xc000a06000) Reply frame received for 3\nI0524 00:18:05.127289 2341 log.go:172] (0xc000a06000) (0xc000678500) Create stream\nI0524 00:18:05.127297 2341 log.go:172] (0xc000a06000) (0xc000678500) Stream added, broadcasting: 5\nI0524 00:18:05.128299 2341 log.go:172] (0xc000a06000) Reply frame received for 5\nI0524 00:18:05.194881 2341 log.go:172] (0xc000a06000) Data frame received for 5\nI0524 00:18:05.194926 2341 log.go:172] (0xc000678500) (5) Data frame handling\nI0524 00:18:05.194950 2341 log.go:172] (0xc000678500) (5) Data frame sent\nI0524 00:18:05.194962 2341 log.go:172] (0xc000a06000) Data frame received for 5\nI0524 00:18:05.194972 2341 log.go:172] (0xc000678500) (5) Data frame handling\n+ nc -zv -t -w 2 10.104.209.158 80\nConnection to 10.104.209.158 80 port [tcp/http] succeeded!\nI0524 00:18:05.195147 2341 log.go:172] (0xc000a06000) Data frame received for 3\nI0524 00:18:05.195167 2341 log.go:172] 
(0xc0006cec80) (3) Data frame handling\nI0524 00:18:05.197370 2341 log.go:172] (0xc000a06000) Data frame received for 1\nI0524 00:18:05.197407 2341 log.go:172] (0xc00073ee60) (1) Data frame handling\nI0524 00:18:05.197437 2341 log.go:172] (0xc00073ee60) (1) Data frame sent\nI0524 00:18:05.197469 2341 log.go:172] (0xc000a06000) (0xc00073ee60) Stream removed, broadcasting: 1\nI0524 00:18:05.197500 2341 log.go:172] (0xc000a06000) Go away received\nI0524 00:18:05.197842 2341 log.go:172] (0xc000a06000) (0xc00073ee60) Stream removed, broadcasting: 1\nI0524 00:18:05.197876 2341 log.go:172] (0xc000a06000) (0xc0006cec80) Stream removed, broadcasting: 3\nI0524 00:18:05.197885 2341 log.go:172] (0xc000a06000) (0xc000678500) Stream removed, broadcasting: 5\n" May 24 00:18:05.203: INFO: stdout: "" May 24 00:18:05.204: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2118 execpod-affinityjpq4t -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31820' May 24 00:18:05.401: INFO: stderr: "I0524 00:18:05.334850 2363 log.go:172] (0xc0007f62c0) (0xc00038bc20) Create stream\nI0524 00:18:05.334913 2363 log.go:172] (0xc0007f62c0) (0xc00038bc20) Stream added, broadcasting: 1\nI0524 00:18:05.337642 2363 log.go:172] (0xc0007f62c0) Reply frame received for 1\nI0524 00:18:05.337684 2363 log.go:172] (0xc0007f62c0) (0xc0001699a0) Create stream\nI0524 00:18:05.337698 2363 log.go:172] (0xc0007f62c0) (0xc0001699a0) Stream added, broadcasting: 3\nI0524 00:18:05.338741 2363 log.go:172] (0xc0007f62c0) Reply frame received for 3\nI0524 00:18:05.338770 2363 log.go:172] (0xc0007f62c0) (0xc00067e280) Create stream\nI0524 00:18:05.338780 2363 log.go:172] (0xc0007f62c0) (0xc00067e280) Stream added, broadcasting: 5\nI0524 00:18:05.339802 2363 log.go:172] (0xc0007f62c0) Reply frame received for 5\nI0524 00:18:05.393743 2363 log.go:172] (0xc0007f62c0) Data frame received for 5\nI0524 00:18:05.393810 2363 log.go:172] (0xc00067e280) (5) Data frame handling\nI0524 00:18:05.393833 2363 log.go:172] (0xc00067e280) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.13 31820\nConnection to 172.17.0.13 31820 port [tcp/31820] succeeded!\nI0524 00:18:05.393855 2363 log.go:172] (0xc0007f62c0) Data frame received for 3\nI0524 00:18:05.393882 2363 log.go:172] (0xc0001699a0) (3) Data frame handling\nI0524 00:18:05.394120 2363 log.go:172] (0xc0007f62c0) Data frame received for 5\nI0524 00:18:05.394207 2363 log.go:172] (0xc00067e280) (5) Data frame handling\nI0524 00:18:05.395928 2363 log.go:172] (0xc0007f62c0) Data frame received for 1\nI0524 00:18:05.395941 2363 log.go:172] (0xc00038bc20) (1) Data frame handling\nI0524 00:18:05.395947 2363 log.go:172] (0xc00038bc20) (1) Data frame sent\nI0524 00:18:05.395960 2363 log.go:172] (0xc0007f62c0) (0xc00038bc20) Stream removed, broadcasting: 1\nI0524 00:18:05.396102 2363 log.go:172] (0xc0007f62c0) Go away received\nI0524 00:18:05.396243 2363 log.go:172] (0xc0007f62c0) (0xc00038bc20) Stream removed, broadcasting: 1\nI0524 00:18:05.396261 2363 log.go:172] (0xc0007f62c0) (0xc0001699a0) Stream removed, broadcasting: 3\nI0524 00:18:05.396272 2363 log.go:172] (0xc0007f62c0) (0xc00067e280) Stream removed, broadcasting: 5\n" May 24 00:18:05.401: INFO: stdout: "" May 24 00:18:05.401: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2118 execpod-affinityjpq4t -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31820' May 24 00:18:05.594: INFO: stderr: "I0524 
00:18:05.526171 2383 log.go:172] (0xc000bd6580) (0xc000940d20) Create stream\nI0524 00:18:05.526232 2383 log.go:172] (0xc000bd6580) (0xc000940d20) Stream added, broadcasting: 1\nI0524 00:18:05.528987 2383 log.go:172] (0xc000bd6580) Reply frame received for 1\nI0524 00:18:05.529024 2383 log.go:172] (0xc000bd6580) (0xc000927400) Create stream\nI0524 00:18:05.529040 2383 log.go:172] (0xc000bd6580) (0xc000927400) Stream added, broadcasting: 3\nI0524 00:18:05.530172 2383 log.go:172] (0xc000bd6580) Reply frame received for 3\nI0524 00:18:05.530216 2383 log.go:172] (0xc000bd6580) (0xc000920c80) Create stream\nI0524 00:18:05.530226 2383 log.go:172] (0xc000bd6580) (0xc000920c80) Stream added, broadcasting: 5\nI0524 00:18:05.531165 2383 log.go:172] (0xc000bd6580) Reply frame received for 5\nI0524 00:18:05.586219 2383 log.go:172] (0xc000bd6580) Data frame received for 3\nI0524 00:18:05.586273 2383 log.go:172] (0xc000927400) (3) Data frame handling\nI0524 00:18:05.586319 2383 log.go:172] (0xc000bd6580) Data frame received for 5\nI0524 00:18:05.586354 2383 log.go:172] (0xc000920c80) (5) Data frame handling\nI0524 00:18:05.586374 2383 log.go:172] (0xc000920c80) (5) Data frame sent\nI0524 00:18:05.586389 2383 log.go:172] (0xc000bd6580) Data frame received for 5\nI0524 00:18:05.586399 2383 log.go:172] (0xc000920c80) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 31820\nConnection to 172.17.0.12 31820 port [tcp/31820] succeeded!\nI0524 00:18:05.587709 2383 log.go:172] (0xc000bd6580) Data frame received for 1\nI0524 00:18:05.587752 2383 log.go:172] (0xc000940d20) (1) Data frame handling\nI0524 00:18:05.587792 2383 log.go:172] (0xc000940d20) (1) Data frame sent\nI0524 00:18:05.587823 2383 log.go:172] (0xc000bd6580) (0xc000940d20) Stream removed, broadcasting: 1\nI0524 00:18:05.587853 2383 log.go:172] (0xc000bd6580) Go away received\nI0524 00:18:05.588291 2383 log.go:172] (0xc000bd6580) (0xc000940d20) Stream removed, broadcasting: 1\nI0524 00:18:05.588315 2383 log.go:172] (0xc000bd6580) (0xc000927400) Stream removed, broadcasting: 3\nI0524 00:18:05.588326 2383 log.go:172] (0xc000bd6580) (0xc000920c80) Stream removed, broadcasting: 5\n" May 24 00:18:05.594: INFO: stdout: "" May 24 00:18:05.594: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2118 execpod-affinityjpq4t -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:31820/ ; done' May 24 00:18:05.929: INFO: stderr: "I0524 00:18:05.742825 2403 log.go:172] (0xc000a7e840) (0xc00050ac80) Create stream\nI0524 00:18:05.742870 2403 log.go:172] (0xc000a7e840) (0xc00050ac80) Stream added, broadcasting: 1\nI0524 00:18:05.744967 2403 log.go:172] (0xc000a7e840) Reply frame received for 1\nI0524 00:18:05.745010 2403 log.go:172] (0xc000a7e840) (0xc0001510e0) Create stream\nI0524 00:18:05.745027 2403 log.go:172] (0xc000a7e840) (0xc0001510e0) Stream added, broadcasting: 3\nI0524 00:18:05.746179 2403 log.go:172] (0xc000a7e840) Reply frame received for 3\nI0524 00:18:05.746214 2403 log.go:172] (0xc000a7e840) (0xc000251e00) Create stream\nI0524 00:18:05.746228 2403 log.go:172] (0xc000a7e840) (0xc000251e00) Stream added, broadcasting: 5\nI0524 00:18:05.747002 2403 log.go:172] (0xc000a7e840) Reply frame received for 5\nI0524 00:18:05.822309 2403 log.go:172] (0xc000a7e840) Data frame received for 5\nI0524 00:18:05.822341 2403 log.go:172] (0xc000251e00) (5) Data frame handling\nI0524 00:18:05.822353 2403 log.go:172] (0xc000251e00) (5) Data frame 
sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31820/\nI0524 00:18:05.822370 2403 log.go:172] (0xc000a7e840) Data frame received for 3\nI0524 00:18:05.822378 2403 log.go:172] (0xc0001510e0) (3) Data frame handling\nI0524 00:18:05.822387 2403 log.go:172] (0xc0001510e0) (3) Data frame sent\nI0524 00:18:05.827966 2403 log.go:172] (0xc000a7e840) Data frame received for 3\nI0524 00:18:05.827997 2403 log.go:172] (0xc0001510e0) (3) Data frame handling\nI0524 00:18:05.828022 2403 log.go:172] (0xc0001510e0) (3) Data frame sent\nI0524 00:18:05.828515 2403 log.go:172] (0xc000a7e840) Data frame received for 5\nI0524 00:18:05.828533 2403 log.go:172] (0xc000251e00) (5) Data frame handling\nI0524 00:18:05.828539 2403 log.go:172] (0xc000251e00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31820/\nI0524 00:18:05.828548 2403 log.go:172] (0xc000a7e840) Data frame received for 3\nI0524 00:18:05.828554 2403 log.go:172] (0xc0001510e0) (3) Data frame handling\nI0524 00:18:05.828559 2403 log.go:172] (0xc0001510e0) (3) Data frame sent\nI0524 00:18:05.836167 2403 log.go:172] (0xc000a7e840) Data frame received for 3\nI0524 00:18:05.836193 2403 log.go:172] (0xc0001510e0) (3) Data frame handling\nI0524 00:18:05.836221 2403 log.go:172] (0xc0001510e0) (3) Data frame sent\nI0524 00:18:05.836753 2403 log.go:172] (0xc000a7e840) Data frame received for 5\nI0524 00:18:05.836783 2403 log.go:172] (0xc000251e00) (5) Data frame handling\nI0524 00:18:05.836796 2403 log.go:172] (0xc000251e00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31820/\nI0524 00:18:05.836824 2403 log.go:172] (0xc000a7e840) Data frame received for 3\nI0524 00:18:05.836836 2403 log.go:172] (0xc0001510e0) (3) Data frame handling\nI0524 00:18:05.836849 2403 log.go:172] (0xc0001510e0) (3) Data frame sent\nI0524 00:18:05.842705 2403 log.go:172] (0xc000a7e840) Data frame received for 3\nI0524 00:18:05.842736 2403 log.go:172] (0xc0001510e0) (3) Data frame handling\nI0524 00:18:05.842772 2403 log.go:172] (0xc0001510e0) (3) Data frame sent\nI0524 00:18:05.843421 2403 log.go:172] (0xc000a7e840) Data frame received for 3\nI0524 00:18:05.843440 2403 log.go:172] (0xc0001510e0) (3) Data frame handling\nI0524 00:18:05.843460 2403 log.go:172] (0xc000a7e840) Data frame received for 5\nI0524 00:18:05.843481 2403 log.go:172] (0xc000251e00) (5) Data frame handling\nI0524 00:18:05.843492 2403 log.go:172] (0xc000251e00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31820/\nI0524 00:18:05.843508 2403 log.go:172] (0xc0001510e0) (3) Data frame sent\nI0524 00:18:05.847395 2403 log.go:172] (0xc000a7e840) Data frame received for 3\nI0524 00:18:05.847417 2403 log.go:172] (0xc0001510e0) (3) Data frame handling\nI0524 00:18:05.847434 2403 log.go:172] (0xc0001510e0) (3) Data frame sent\nI0524 00:18:05.847829 2403 log.go:172] (0xc000a7e840) Data frame received for 3\nI0524 00:18:05.847874 2403 log.go:172] (0xc0001510e0) (3) Data frame handling\nI0524 00:18:05.847900 2403 log.go:172] (0xc0001510e0) (3) Data frame sent\nI0524 00:18:05.847938 2403 log.go:172] (0xc000a7e840) Data frame received for 5\nI0524 00:18:05.847960 2403 log.go:172] (0xc000251e00) (5) Data frame handling\nI0524 00:18:05.847993 2403 log.go:172] (0xc000251e00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31820/\nI0524 00:18:05.854060 2403 log.go:172] (0xc000a7e840) Data frame received for 3\nI0524 00:18:05.854076 2403 log.go:172] (0xc0001510e0) (3) 
Data frame handling\nI0524 00:18:05.854084 2403 log.go:172] (0xc0001510e0) (3) Data frame sent\nI0524 00:18:05.854580 2403 log.go:172] (0xc000a7e840) Data frame received for 3\nI0524 00:18:05.854600 2403 log.go:172] (0xc0001510e0) (3) Data frame handling\nI0524 00:18:05.854616 2403 log.go:172] (0xc0001510e0) (3) Data frame sent\nI0524 00:18:05.854634 2403 log.go:172] (0xc000a7e840) Data frame received for 5\nI0524 00:18:05.854667 2403 log.go:172] (0xc000251e00) (5) Data frame handling\nI0524 00:18:05.854684 2403 log.go:172] (0xc000251e00) (5) Data frame sent\n+ echo\nI0524 00:18:05.854699 2403 log.go:172] (0xc000a7e840) Data frame received for 5\nI0524 00:18:05.854726 2403 log.go:172] (0xc000251e00) (5) Data frame handling\nI0524 00:18:05.854742 2403 log.go:172] (0xc000251e00) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31820/\nI0524 00:18:05.861711 2403 log.go:172] (0xc000a7e840) Data frame received for 3\nI0524 00:18:05.861732 2403 log.go:172] (0xc0001510e0) (3) Data frame handling\nI0524 00:18:05.861742 2403 log.go:172] (0xc0001510e0) (3) Data frame sent\nI0524 00:18:05.862331 2403 log.go:172] (0xc000a7e840) Data frame received for 5\nI0524 00:18:05.862371 2403 log.go:172] (0xc000251e00) (5) Data frame handling\nI0524 00:18:05.862386 2403 log.go:172] (0xc000251e00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31820/\nI0524 00:18:05.862421 2403 log.go:172] (0xc000a7e840) Data frame received for 3\nI0524 00:18:05.862443 2403 log.go:172] (0xc0001510e0) (3) Data frame handling\nI0524 00:18:05.862458 2403 log.go:172] (0xc0001510e0) (3) Data frame sent\nI0524 00:18:05.866809 2403 log.go:172] (0xc000a7e840) Data frame received for 3\nI0524 00:18:05.866829 2403 log.go:172] (0xc0001510e0) (3) Data frame handling\nI0524 00:18:05.866848 2403 log.go:172] (0xc0001510e0) (3) Data frame sent\nI0524 00:18:05.867242 2403 log.go:172] (0xc000a7e840) Data frame received for 3\nI0524 00:18:05.867269 2403 log.go:172] (0xc0001510e0) (3) Data frame handling\nI0524 00:18:05.867279 2403 log.go:172] (0xc0001510e0) (3) Data frame sent\nI0524 00:18:05.867289 2403 log.go:172] (0xc000a7e840) Data frame received for 5\nI0524 00:18:05.867297 2403 log.go:172] (0xc000251e00) (5) Data frame handling\nI0524 00:18:05.867309 2403 log.go:172] (0xc000251e00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31820/\nI0524 00:18:05.873785 2403 log.go:172] (0xc000a7e840) Data frame received for 3\nI0524 00:18:05.873804 2403 log.go:172] (0xc0001510e0) (3) Data frame handling\nI0524 00:18:05.873821 2403 log.go:172] (0xc0001510e0) (3) Data frame sent\nI0524 00:18:05.874391 2403 log.go:172] (0xc000a7e840) Data frame received for 5\nI0524 00:18:05.874418 2403 log.go:172] (0xc000251e00) (5) Data frame handling\nI0524 00:18:05.874439 2403 log.go:172] (0xc000251e00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I0524 00:18:05.874534 2403 log.go:172] (0xc000a7e840) Data frame received for 3\nI0524 00:18:05.874568 2403 log.go:172] (0xc0001510e0) (3) Data frame handling\nI0524 00:18:05.874588 2403 log.go:172] (0xc0001510e0) (3) Data frame sent\nI0524 00:18:05.874610 2403 log.go:172] (0xc000a7e840) Data frame received for 5\nI0524 00:18:05.874622 2403 log.go:172] (0xc000251e00) (5) Data frame handling\nI0524 00:18:05.874634 2403 log.go:172] (0xc000251e00) (5) Data frame sent\n http://172.17.0.13:31820/\nI0524 00:18:05.879385 2403 log.go:172] (0xc000a7e840) Data frame received for 3\nI0524 00:18:05.879417 2403 log.go:172] 
(0xc0001510e0) (3) Data frame handling\nI0524 00:18:05.879443 2403 log.go:172] (0xc0001510e0) (3) Data frame sent\nI0524 00:18:05.879954 2403 log.go:172] (0xc000a7e840) Data frame received for 3\nI0524 00:18:05.879979 2403 log.go:172] (0xc0001510e0) (3) Data frame handling\nI0524 00:18:05.880006 2403 log.go:172] (0xc0001510e0) (3) Data frame sent\nI0524 00:18:05.880028 2403 log.go:172] (0xc000a7e840) Data frame received for 5\nI0524 00:18:05.880071 2403 log.go:172] (0xc000251e00) (5) Data frame handling\nI0524 00:18:05.880114 2403 log.go:172] (0xc000251e00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31820/\nI0524 00:18:05.883754 2403 log.go:172] (0xc000a7e840) Data frame received for 3\nI0524 00:18:05.883767 2403 log.go:172] (0xc0001510e0) (3) Data frame handling\nI0524 00:18:05.883775 2403 log.go:172] (0xc0001510e0) (3) Data frame sent\nI0524 00:18:05.884252 2403 log.go:172] (0xc000a7e840) Data frame received for 5\nI0524 00:18:05.884263 2403 log.go:172] (0xc000251e00) (5) Data frame handling\nI0524 00:18:05.884269 2403 log.go:172] (0xc000251e00) (5) Data frame sent\nI0524 00:18:05.884277 2403 log.go:172] (0xc000a7e840) Data frame received for 3\nI0524 00:18:05.884283 2403 log.go:172] (0xc0001510e0) (3) Data frame handling\nI0524 00:18:05.884288 2403 log.go:172] (0xc0001510e0) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31820/\nI0524 00:18:05.888344 2403 log.go:172] (0xc000a7e840) Data frame received for 3\nI0524 00:18:05.888358 2403 log.go:172] (0xc0001510e0) (3) Data frame handling\nI0524 00:18:05.888366 2403 log.go:172] (0xc0001510e0) (3) Data frame sent\nI0524 00:18:05.888774 2403 log.go:172] (0xc000a7e840) Data frame received for 5\nI0524 00:18:05.888790 2403 log.go:172] (0xc000251e00) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31820/I0524 00:18:05.888811 2403 log.go:172] (0xc000a7e840) Data frame received for 3\nI0524 00:18:05.888838 2403 log.go:172] (0xc0001510e0) (3) Data frame handling\nI0524 00:18:05.888851 2403 log.go:172] (0xc0001510e0) (3) Data frame sent\nI0524 00:18:05.888870 2403 log.go:172] (0xc000251e00) (5) Data frame sent\nI0524 00:18:05.888882 2403 log.go:172] (0xc000a7e840) Data frame received for 5\nI0524 00:18:05.888891 2403 log.go:172] (0xc000251e00) (5) Data frame handling\nI0524 00:18:05.888907 2403 log.go:172] (0xc000251e00) (5) Data frame sent\n\nI0524 00:18:05.895796 2403 log.go:172] (0xc000a7e840) Data frame received for 3\nI0524 00:18:05.895816 2403 log.go:172] (0xc0001510e0) (3) Data frame handling\nI0524 00:18:05.895832 2403 log.go:172] (0xc0001510e0) (3) Data frame sent\nI0524 00:18:05.896260 2403 log.go:172] (0xc000a7e840) Data frame received for 3\nI0524 00:18:05.896280 2403 log.go:172] (0xc0001510e0) (3) Data frame handling\nI0524 00:18:05.896289 2403 log.go:172] (0xc0001510e0) (3) Data frame sent\nI0524 00:18:05.896298 2403 log.go:172] (0xc000a7e840) Data frame received for 5\nI0524 00:18:05.896305 2403 log.go:172] (0xc000251e00) (5) Data frame handling\nI0524 00:18:05.896313 2403 log.go:172] (0xc000251e00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31820/\nI0524 00:18:05.900626 2403 log.go:172] (0xc000a7e840) Data frame received for 3\nI0524 00:18:05.900641 2403 log.go:172] (0xc0001510e0) (3) Data frame handling\nI0524 00:18:05.900653 2403 log.go:172] (0xc0001510e0) (3) Data frame sent\nI0524 00:18:05.901333 2403 log.go:172] (0xc000a7e840) Data frame received for 5\nI0524 00:18:05.901347 2403 
log.go:172] (0xc000251e00) (5) Data frame handling\nI0524 00:18:05.901355 2403 log.go:172] (0xc000251e00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31820/\nI0524 00:18:05.901370 2403 log.go:172] (0xc000a7e840) Data frame received for 3\nI0524 00:18:05.901390 2403 log.go:172] (0xc0001510e0) (3) Data frame handling\nI0524 00:18:05.901409 2403 log.go:172] (0xc0001510e0) (3) Data frame sent\nI0524 00:18:05.907539 2403 log.go:172] (0xc000a7e840) Data frame received for 3\nI0524 00:18:05.907561 2403 log.go:172] (0xc0001510e0) (3) Data frame handling\nI0524 00:18:05.907589 2403 log.go:172] (0xc0001510e0) (3) Data frame sent\nI0524 00:18:05.908435 2403 log.go:172] (0xc000a7e840) Data frame received for 3\nI0524 00:18:05.908451 2403 log.go:172] (0xc0001510e0) (3) Data frame handling\nI0524 00:18:05.908488 2403 log.go:172] (0xc000a7e840) Data frame received for 5\nI0524 00:18:05.908535 2403 log.go:172] (0xc000251e00) (5) Data frame handling\nI0524 00:18:05.908553 2403 log.go:172] (0xc000251e00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31820/\nI0524 00:18:05.908577 2403 log.go:172] (0xc0001510e0) (3) Data frame sent\nI0524 00:18:05.912639 2403 log.go:172] (0xc000a7e840) Data frame received for 3\nI0524 00:18:05.912665 2403 log.go:172] (0xc0001510e0) (3) Data frame handling\nI0524 00:18:05.912710 2403 log.go:172] (0xc0001510e0) (3) Data frame sent\nI0524 00:18:05.913354 2403 log.go:172] (0xc000a7e840) Data frame received for 3\nI0524 00:18:05.913391 2403 log.go:172] (0xc000a7e840) Data frame received for 5\nI0524 00:18:05.913424 2403 log.go:172] (0xc000251e00) (5) Data frame handling\nI0524 00:18:05.913442 2403 log.go:172] (0xc000251e00) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31820/\nI0524 00:18:05.913474 2403 log.go:172] (0xc0001510e0) (3) Data frame handling\nI0524 00:18:05.913491 2403 log.go:172] (0xc0001510e0) (3) Data frame sent\nI0524 00:18:05.920145 2403 log.go:172] (0xc000a7e840) Data frame received for 3\nI0524 00:18:05.920162 2403 log.go:172] (0xc0001510e0) (3) Data frame handling\nI0524 00:18:05.920172 2403 log.go:172] (0xc0001510e0) (3) Data frame sent\nI0524 00:18:05.921568 2403 log.go:172] (0xc000a7e840) Data frame received for 3\nI0524 00:18:05.921603 2403 log.go:172] (0xc0001510e0) (3) Data frame handling\nI0524 00:18:05.921822 2403 log.go:172] (0xc000a7e840) Data frame received for 5\nI0524 00:18:05.921840 2403 log.go:172] (0xc000251e00) (5) Data frame handling\nI0524 00:18:05.923838 2403 log.go:172] (0xc000a7e840) Data frame received for 1\nI0524 00:18:05.923861 2403 log.go:172] (0xc00050ac80) (1) Data frame handling\nI0524 00:18:05.923876 2403 log.go:172] (0xc00050ac80) (1) Data frame sent\nI0524 00:18:05.923892 2403 log.go:172] (0xc000a7e840) (0xc00050ac80) Stream removed, broadcasting: 1\nI0524 00:18:05.923911 2403 log.go:172] (0xc000a7e840) Go away received\nI0524 00:18:05.924340 2403 log.go:172] (0xc000a7e840) (0xc00050ac80) Stream removed, broadcasting: 1\nI0524 00:18:05.924362 2403 log.go:172] (0xc000a7e840) (0xc0001510e0) Stream removed, broadcasting: 3\nI0524 00:18:05.924373 2403 log.go:172] (0xc000a7e840) (0xc000251e00) Stream removed, broadcasting: 5\n" May 24 00:18:05.930: INFO: stdout: 
"\naffinity-nodeport-wdtq5\naffinity-nodeport-wdtq5\naffinity-nodeport-wdtq5\naffinity-nodeport-wdtq5\naffinity-nodeport-wdtq5\naffinity-nodeport-wdtq5\naffinity-nodeport-wdtq5\naffinity-nodeport-wdtq5\naffinity-nodeport-wdtq5\naffinity-nodeport-wdtq5\naffinity-nodeport-wdtq5\naffinity-nodeport-wdtq5\naffinity-nodeport-wdtq5\naffinity-nodeport-wdtq5\naffinity-nodeport-wdtq5\naffinity-nodeport-wdtq5" May 24 00:18:05.930: INFO: Received response from host: May 24 00:18:05.930: INFO: Received response from host: affinity-nodeport-wdtq5 May 24 00:18:05.930: INFO: Received response from host: affinity-nodeport-wdtq5 May 24 00:18:05.930: INFO: Received response from host: affinity-nodeport-wdtq5 May 24 00:18:05.930: INFO: Received response from host: affinity-nodeport-wdtq5 May 24 00:18:05.930: INFO: Received response from host: affinity-nodeport-wdtq5 May 24 00:18:05.930: INFO: Received response from host: affinity-nodeport-wdtq5 May 24 00:18:05.930: INFO: Received response from host: affinity-nodeport-wdtq5 May 24 00:18:05.930: INFO: Received response from host: affinity-nodeport-wdtq5 May 24 00:18:05.930: INFO: Received response from host: affinity-nodeport-wdtq5 May 24 00:18:05.930: INFO: Received response from host: affinity-nodeport-wdtq5 May 24 00:18:05.930: INFO: Received response from host: affinity-nodeport-wdtq5 May 24 00:18:05.930: INFO: Received response from host: affinity-nodeport-wdtq5 May 24 00:18:05.930: INFO: Received response from host: affinity-nodeport-wdtq5 May 24 00:18:05.930: INFO: Received response from host: affinity-nodeport-wdtq5 May 24 00:18:05.930: INFO: Received response from host: affinity-nodeport-wdtq5 May 24 00:18:05.930: INFO: Received response from host: affinity-nodeport-wdtq5 May 24 00:18:05.930: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-2118, will wait for the garbage collector to delete the pods May 24 00:18:06.044: INFO: Deleting ReplicationController affinity-nodeport took: 6.380625ms May 24 00:18:06.344: INFO: Terminating ReplicationController affinity-nodeport pods took: 300.265182ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:18:15.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2118" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:21.904 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":158,"skipped":2649,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:18:15.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 24 00:18:15.156: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:18:25.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2459" for this suite. 
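
The watch is the crux of the pod test just logged: it is registered before the pod is submitted, so creation and deletion are observed as events rather than by polling. A sketch of that pattern, reusing a clientset cs built as in the Service example above (v0.18-era Watch signature; in the real test a label selector narrows the watch to the one pod):

	w, err := cs.CoreV1().Pods("pods-2459").Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		pod, ok := ev.Object.(*v1.Pod)
		if !ok {
			continue // skip non-Pod objects such as watch errors
		}
		// A graceful delete surfaces as MODIFIED (deletionTimestamp set)
		// before the final DELETED event the test waits for.
		fmt.Println(ev.Type, pod.Name, pod.Status.Phase)
	}
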
• [SLOW TEST:10.197 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":288,"completed":159,"skipped":2657,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:18:25.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-9917 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-9917 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9917 May 24 00:18:25.406: INFO: Found 0 stateful pods, waiting for 1 May 24 00:18:35.411: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 24 00:18:35.415: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9917 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 24 00:18:35.662: INFO: stderr: "I0524 00:18:35.552508 2424 log.go:172] (0xc000b31130) (0xc000bce5a0) Create stream\nI0524 00:18:35.552565 2424 log.go:172] (0xc000b31130) (0xc000bce5a0) Stream added, broadcasting: 1\nI0524 00:18:35.557881 2424 log.go:172] (0xc000b31130) Reply frame received for 1\nI0524 00:18:35.557919 2424 log.go:172] (0xc000b31130) (0xc000744dc0) Create stream\nI0524 00:18:35.557929 2424 log.go:172] (0xc000b31130) (0xc000744dc0) Stream added, broadcasting: 3\nI0524 00:18:35.558810 2424 log.go:172] (0xc000b31130) Reply frame received for 3\nI0524 00:18:35.558847 2424 log.go:172] (0xc000b31130) (0xc00061c460) Create stream\nI0524 00:18:35.558856 2424 log.go:172] (0xc000b31130) (0xc00061c460) Stream added, broadcasting: 5\nI0524 00:18:35.559746 2424 log.go:172] (0xc000b31130) Reply frame received for 5\nI0524 00:18:35.626386 2424 log.go:172] (0xc000b31130) Data frame received for 5\nI0524 00:18:35.626424 2424 log.go:172] (0xc00061c460) (5) Data frame handling\nI0524 00:18:35.626447 2424 log.go:172] (0xc00061c460) (5) Data frame sent\n+ mv -v 
/usr/local/apache2/htdocs/index.html /tmp/\nI0524 00:18:35.654809 2424 log.go:172] (0xc000b31130) Data frame received for 3\nI0524 00:18:35.654857 2424 log.go:172] (0xc000744dc0) (3) Data frame handling\nI0524 00:18:35.654879 2424 log.go:172] (0xc000744dc0) (3) Data frame sent\nI0524 00:18:35.655296 2424 log.go:172] (0xc000b31130) Data frame received for 3\nI0524 00:18:35.655336 2424 log.go:172] (0xc000744dc0) (3) Data frame handling\nI0524 00:18:35.655363 2424 log.go:172] (0xc000b31130) Data frame received for 5\nI0524 00:18:35.655379 2424 log.go:172] (0xc00061c460) (5) Data frame handling\nI0524 00:18:35.657087 2424 log.go:172] (0xc000b31130) Data frame received for 1\nI0524 00:18:35.657319 2424 log.go:172] (0xc000bce5a0) (1) Data frame handling\nI0524 00:18:35.657346 2424 log.go:172] (0xc000bce5a0) (1) Data frame sent\nI0524 00:18:35.657362 2424 log.go:172] (0xc000b31130) (0xc000bce5a0) Stream removed, broadcasting: 1\nI0524 00:18:35.657383 2424 log.go:172] (0xc000b31130) Go away received\nI0524 00:18:35.657640 2424 log.go:172] (0xc000b31130) (0xc000bce5a0) Stream removed, broadcasting: 1\nI0524 00:18:35.657658 2424 log.go:172] (0xc000b31130) (0xc000744dc0) Stream removed, broadcasting: 3\nI0524 00:18:35.657665 2424 log.go:172] (0xc000b31130) (0xc00061c460) Stream removed, broadcasting: 5\n" May 24 00:18:35.662: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 24 00:18:35.662: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 24 00:18:35.666: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 24 00:18:45.671: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 24 00:18:45.671: INFO: Waiting for statefulset status.replicas updated to 0 May 24 00:18:45.692: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999101s May 24 00:18:46.696: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.989299461s May 24 00:18:47.700: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.984724146s May 24 00:18:48.705: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.980474191s May 24 00:18:49.709: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.975791127s May 24 00:18:50.714: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.972075506s May 24 00:18:51.718: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.967339535s May 24 00:18:52.721: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.963307204s May 24 00:18:53.726: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.9595155s May 24 00:18:54.731: INFO: Verifying statefulset ss doesn't scale past 1 for another 955.044017ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9917 May 24 00:18:55.734: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 00:18:55.937: INFO: stderr: "I0524 00:18:55.864594 2446 log.go:172] (0xc000418000) (0xc0004d8e60) Create stream\nI0524 00:18:55.864664 2446 log.go:172] (0xc000418000) (0xc0004d8e60) Stream added, broadcasting: 1\nI0524 00:18:55.867900 2446 log.go:172] (0xc000418000) Reply frame received for 1\nI0524 00:18:55.867964 2446 
log.go:172] (0xc000418000) (0xc000328820) Create stream\nI0524 00:18:55.867982 2446 log.go:172] (0xc000418000) (0xc000328820) Stream added, broadcasting: 3\nI0524 00:18:55.868745 2446 log.go:172] (0xc000418000) Reply frame received for 3\nI0524 00:18:55.868766 2446 log.go:172] (0xc000418000) (0xc0003c8000) Create stream\nI0524 00:18:55.868773 2446 log.go:172] (0xc000418000) (0xc0003c8000) Stream added, broadcasting: 5\nI0524 00:18:55.869813 2446 log.go:172] (0xc000418000) Reply frame received for 5\nI0524 00:18:55.930878 2446 log.go:172] (0xc000418000) Data frame received for 3\nI0524 00:18:55.930920 2446 log.go:172] (0xc000328820) (3) Data frame handling\nI0524 00:18:55.930933 2446 log.go:172] (0xc000328820) (3) Data frame sent\nI0524 00:18:55.930941 2446 log.go:172] (0xc000418000) Data frame received for 3\nI0524 00:18:55.930961 2446 log.go:172] (0xc000328820) (3) Data frame handling\nI0524 00:18:55.931030 2446 log.go:172] (0xc000418000) Data frame received for 5\nI0524 00:18:55.931067 2446 log.go:172] (0xc0003c8000) (5) Data frame handling\nI0524 00:18:55.931094 2446 log.go:172] (0xc0003c8000) (5) Data frame sent\nI0524 00:18:55.931104 2446 log.go:172] (0xc000418000) Data frame received for 5\nI0524 00:18:55.931110 2446 log.go:172] (0xc0003c8000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0524 00:18:55.932371 2446 log.go:172] (0xc000418000) Data frame received for 1\nI0524 00:18:55.932434 2446 log.go:172] (0xc0004d8e60) (1) Data frame handling\nI0524 00:18:55.932451 2446 log.go:172] (0xc0004d8e60) (1) Data frame sent\nI0524 00:18:55.932469 2446 log.go:172] (0xc000418000) (0xc0004d8e60) Stream removed, broadcasting: 1\nI0524 00:18:55.932492 2446 log.go:172] (0xc000418000) Go away received\nI0524 00:18:55.932883 2446 log.go:172] (0xc000418000) (0xc0004d8e60) Stream removed, broadcasting: 1\nI0524 00:18:55.932912 2446 log.go:172] (0xc000418000) (0xc000328820) Stream removed, broadcasting: 3\nI0524 00:18:55.932931 2446 log.go:172] (0xc000418000) (0xc0003c8000) Stream removed, broadcasting: 5\n" May 24 00:18:55.937: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 24 00:18:55.937: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 24 00:18:55.941: INFO: Found 1 stateful pods, waiting for 3 May 24 00:19:05.945: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 24 00:19:05.946: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 24 00:19:05.946: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 24 00:19:05.955: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9917 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 24 00:19:06.195: INFO: stderr: "I0524 00:19:06.099502 2466 log.go:172] (0xc0000e0d10) (0xc00051b5e0) Create stream\nI0524 00:19:06.099543 2466 log.go:172] (0xc0000e0d10) (0xc00051b5e0) Stream added, broadcasting: 1\nI0524 00:19:06.100841 2466 log.go:172] (0xc0000e0d10) Reply frame received for 1\nI0524 00:19:06.100866 2466 log.go:172] (0xc0000e0d10) (0xc000918640) Create stream\nI0524 00:19:06.100873 2466 log.go:172] (0xc0000e0d10) (0xc000918640) Stream added, 
broadcasting: 3\nI0524 00:19:06.101626 2466 log.go:172] (0xc0000e0d10) Reply frame received for 3\nI0524 00:19:06.101653 2466 log.go:172] (0xc0000e0d10) (0xc0008f4280) Create stream\nI0524 00:19:06.101661 2466 log.go:172] (0xc0000e0d10) (0xc0008f4280) Stream added, broadcasting: 5\nI0524 00:19:06.102270 2466 log.go:172] (0xc0000e0d10) Reply frame received for 5\nI0524 00:19:06.190373 2466 log.go:172] (0xc0000e0d10) Data frame received for 5\nI0524 00:19:06.190444 2466 log.go:172] (0xc0008f4280) (5) Data frame handling\nI0524 00:19:06.190466 2466 log.go:172] (0xc0008f4280) (5) Data frame sent\nI0524 00:19:06.190481 2466 log.go:172] (0xc0000e0d10) Data frame received for 5\nI0524 00:19:06.190498 2466 log.go:172] (0xc0008f4280) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0524 00:19:06.190526 2466 log.go:172] (0xc0000e0d10) Data frame received for 3\nI0524 00:19:06.190563 2466 log.go:172] (0xc000918640) (3) Data frame handling\nI0524 00:19:06.190588 2466 log.go:172] (0xc000918640) (3) Data frame sent\nI0524 00:19:06.190608 2466 log.go:172] (0xc0000e0d10) Data frame received for 3\nI0524 00:19:06.190619 2466 log.go:172] (0xc000918640) (3) Data frame handling\nI0524 00:19:06.191632 2466 log.go:172] (0xc0000e0d10) Data frame received for 1\nI0524 00:19:06.191664 2466 log.go:172] (0xc00051b5e0) (1) Data frame handling\nI0524 00:19:06.191678 2466 log.go:172] (0xc00051b5e0) (1) Data frame sent\nI0524 00:19:06.191696 2466 log.go:172] (0xc0000e0d10) (0xc00051b5e0) Stream removed, broadcasting: 1\nI0524 00:19:06.191756 2466 log.go:172] (0xc0000e0d10) Go away received\nI0524 00:19:06.192120 2466 log.go:172] (0xc0000e0d10) (0xc00051b5e0) Stream removed, broadcasting: 1\nI0524 00:19:06.192150 2466 log.go:172] (0xc0000e0d10) (0xc000918640) Stream removed, broadcasting: 3\nI0524 00:19:06.192159 2466 log.go:172] (0xc0000e0d10) (0xc0008f4280) Stream removed, broadcasting: 5\n" May 24 00:19:06.195: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 24 00:19:06.195: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 24 00:19:06.195: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9917 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 24 00:19:06.450: INFO: stderr: "I0524 00:19:06.335462 2485 log.go:172] (0xc000ad13f0) (0xc000abc3c0) Create stream\nI0524 00:19:06.335543 2485 log.go:172] (0xc000ad13f0) (0xc000abc3c0) Stream added, broadcasting: 1\nI0524 00:19:06.340397 2485 log.go:172] (0xc000ad13f0) Reply frame received for 1\nI0524 00:19:06.340447 2485 log.go:172] (0xc000ad13f0) (0xc000430e60) Create stream\nI0524 00:19:06.340459 2485 log.go:172] (0xc000ad13f0) (0xc000430e60) Stream added, broadcasting: 3\nI0524 00:19:06.341628 2485 log.go:172] (0xc000ad13f0) Reply frame received for 3\nI0524 00:19:06.341665 2485 log.go:172] (0xc000ad13f0) (0xc000360140) Create stream\nI0524 00:19:06.341681 2485 log.go:172] (0xc000ad13f0) (0xc000360140) Stream added, broadcasting: 5\nI0524 00:19:06.342603 2485 log.go:172] (0xc000ad13f0) Reply frame received for 5\nI0524 00:19:06.411546 2485 log.go:172] (0xc000ad13f0) Data frame received for 5\nI0524 00:19:06.411600 2485 log.go:172] (0xc000360140) (5) Data frame handling\nI0524 00:19:06.411635 2485 log.go:172] (0xc000360140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0524 
00:19:06.440277 2485 log.go:172] (0xc000ad13f0) Data frame received for 3\nI0524 00:19:06.440328 2485 log.go:172] (0xc000430e60) (3) Data frame handling\nI0524 00:19:06.440356 2485 log.go:172] (0xc000430e60) (3) Data frame sent\nI0524 00:19:06.440535 2485 log.go:172] (0xc000ad13f0) Data frame received for 5\nI0524 00:19:06.440570 2485 log.go:172] (0xc000360140) (5) Data frame handling\nI0524 00:19:06.440616 2485 log.go:172] (0xc000ad13f0) Data frame received for 3\nI0524 00:19:06.440640 2485 log.go:172] (0xc000430e60) (3) Data frame handling\nI0524 00:19:06.443539 2485 log.go:172] (0xc000ad13f0) Data frame received for 1\nI0524 00:19:06.443631 2485 log.go:172] (0xc000abc3c0) (1) Data frame handling\nI0524 00:19:06.443680 2485 log.go:172] (0xc000abc3c0) (1) Data frame sent\nI0524 00:19:06.443722 2485 log.go:172] (0xc000ad13f0) (0xc000abc3c0) Stream removed, broadcasting: 1\nI0524 00:19:06.443752 2485 log.go:172] (0xc000ad13f0) Go away received\nI0524 00:19:06.444300 2485 log.go:172] (0xc000ad13f0) (0xc000abc3c0) Stream removed, broadcasting: 1\nI0524 00:19:06.444334 2485 log.go:172] (0xc000ad13f0) (0xc000430e60) Stream removed, broadcasting: 3\nI0524 00:19:06.444357 2485 log.go:172] (0xc000ad13f0) (0xc000360140) Stream removed, broadcasting: 5\n" May 24 00:19:06.450: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 24 00:19:06.450: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 24 00:19:06.450: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9917 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 24 00:19:06.718: INFO: stderr: "I0524 00:19:06.589948 2506 log.go:172] (0xc000ae3550) (0xc0009a4460) Create stream\nI0524 00:19:06.590021 2506 log.go:172] (0xc000ae3550) (0xc0009a4460) Stream added, broadcasting: 1\nI0524 00:19:06.594975 2506 log.go:172] (0xc000ae3550) Reply frame received for 1\nI0524 00:19:06.595027 2506 log.go:172] (0xc000ae3550) (0xc0005201e0) Create stream\nI0524 00:19:06.595042 2506 log.go:172] (0xc000ae3550) (0xc0005201e0) Stream added, broadcasting: 3\nI0524 00:19:06.596055 2506 log.go:172] (0xc000ae3550) Reply frame received for 3\nI0524 00:19:06.596101 2506 log.go:172] (0xc000ae3550) (0xc000458d20) Create stream\nI0524 00:19:06.596120 2506 log.go:172] (0xc000ae3550) (0xc000458d20) Stream added, broadcasting: 5\nI0524 00:19:06.597096 2506 log.go:172] (0xc000ae3550) Reply frame received for 5\nI0524 00:19:06.661359 2506 log.go:172] (0xc000ae3550) Data frame received for 5\nI0524 00:19:06.661414 2506 log.go:172] (0xc000458d20) (5) Data frame handling\nI0524 00:19:06.661440 2506 log.go:172] (0xc000458d20) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0524 00:19:06.710472 2506 log.go:172] (0xc000ae3550) Data frame received for 3\nI0524 00:19:06.710498 2506 log.go:172] (0xc0005201e0) (3) Data frame handling\nI0524 00:19:06.710516 2506 log.go:172] (0xc000ae3550) Data frame received for 5\nI0524 00:19:06.710538 2506 log.go:172] (0xc000458d20) (5) Data frame handling\nI0524 00:19:06.710563 2506 log.go:172] (0xc0005201e0) (3) Data frame sent\nI0524 00:19:06.710585 2506 log.go:172] (0xc000ae3550) Data frame received for 3\nI0524 00:19:06.710600 2506 log.go:172] (0xc0005201e0) (3) Data frame handling\nI0524 00:19:06.712548 2506 log.go:172] (0xc000ae3550) Data frame received for 1\nI0524 00:19:06.712579 2506 
log.go:172] (0xc0009a4460) (1) Data frame handling\nI0524 00:19:06.712613 2506 log.go:172] (0xc0009a4460) (1) Data frame sent\nI0524 00:19:06.712633 2506 log.go:172] (0xc000ae3550) (0xc0009a4460) Stream removed, broadcasting: 1\nI0524 00:19:06.712650 2506 log.go:172] (0xc000ae3550) Go away received\nI0524 00:19:06.713298 2506 log.go:172] (0xc000ae3550) (0xc0009a4460) Stream removed, broadcasting: 1\nI0524 00:19:06.713337 2506 log.go:172] (0xc000ae3550) (0xc0005201e0) Stream removed, broadcasting: 3\nI0524 00:19:06.713365 2506 log.go:172] (0xc000ae3550) (0xc000458d20) Stream removed, broadcasting: 5\n" May 24 00:19:06.718: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 24 00:19:06.718: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 24 00:19:06.718: INFO: Waiting for statefulset status.replicas updated to 0 May 24 00:19:06.722: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 24 00:19:16.729: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 24 00:19:16.729: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 24 00:19:16.729: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 24 00:19:16.792: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999458s May 24 00:19:17.826: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.978254053s May 24 00:19:18.832: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.943649709s May 24 00:19:19.836: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.938220145s May 24 00:19:20.850: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.933872771s May 24 00:19:21.857: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.919658529s May 24 00:19:22.869: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.913018522s May 24 00:19:23.916: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.901005173s May 24 00:19:24.922: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.854052934s May 24 00:19:25.946: INFO: Verifying statefulset ss doesn't scale past 3 for another 847.819588ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-9917 May 24 00:19:26.960: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9917 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 00:19:27.194: INFO: stderr: "I0524 00:19:27.088512 2526 log.go:172] (0xc000ae76b0) (0xc000996640) Create stream\nI0524 00:19:27.088564 2526 log.go:172] (0xc000ae76b0) (0xc000996640) Stream added, broadcasting: 1\nI0524 00:19:27.092860 2526 log.go:172] (0xc000ae76b0) Reply frame received for 1\nI0524 00:19:27.092901 2526 log.go:172] (0xc000ae76b0) (0xc0006bafa0) Create stream\nI0524 00:19:27.092909 2526 log.go:172] (0xc000ae76b0) (0xc0006bafa0) Stream added, broadcasting: 3\nI0524 00:19:27.094376 2526 log.go:172] (0xc000ae76b0) Reply frame received for 3\nI0524 00:19:27.094428 2526 log.go:172] (0xc000ae76b0) (0xc00056c640) Create stream\nI0524 00:19:27.094442 2526 log.go:172] (0xc000ae76b0) (0xc00056c640) Stream added, broadcasting: 5\nI0524 00:19:27.095426 2526 log.go:172] (0xc000ae76b0) Reply frame received for 
5\nI0524 00:19:27.186333 2526 log.go:172] (0xc000ae76b0) Data frame received for 5\nI0524 00:19:27.186364 2526 log.go:172] (0xc00056c640) (5) Data frame handling\nI0524 00:19:27.186376 2526 log.go:172] (0xc00056c640) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0524 00:19:27.186399 2526 log.go:172] (0xc000ae76b0) Data frame received for 3\nI0524 00:19:27.186418 2526 log.go:172] (0xc0006bafa0) (3) Data frame handling\nI0524 00:19:27.186429 2526 log.go:172] (0xc0006bafa0) (3) Data frame sent\nI0524 00:19:27.186438 2526 log.go:172] (0xc000ae76b0) Data frame received for 3\nI0524 00:19:27.186447 2526 log.go:172] (0xc0006bafa0) (3) Data frame handling\nI0524 00:19:27.186458 2526 log.go:172] (0xc000ae76b0) Data frame received for 5\nI0524 00:19:27.186480 2526 log.go:172] (0xc00056c640) (5) Data frame handling\nI0524 00:19:27.188009 2526 log.go:172] (0xc000ae76b0) Data frame received for 1\nI0524 00:19:27.188156 2526 log.go:172] (0xc000996640) (1) Data frame handling\nI0524 00:19:27.188203 2526 log.go:172] (0xc000996640) (1) Data frame sent\nI0524 00:19:27.188223 2526 log.go:172] (0xc000ae76b0) (0xc000996640) Stream removed, broadcasting: 1\nI0524 00:19:27.188244 2526 log.go:172] (0xc000ae76b0) Go away received\nI0524 00:19:27.188709 2526 log.go:172] (0xc000ae76b0) (0xc000996640) Stream removed, broadcasting: 1\nI0524 00:19:27.188738 2526 log.go:172] (0xc000ae76b0) (0xc0006bafa0) Stream removed, broadcasting: 3\nI0524 00:19:27.188752 2526 log.go:172] (0xc000ae76b0) (0xc00056c640) Stream removed, broadcasting: 5\n" May 24 00:19:27.194: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 24 00:19:27.194: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 24 00:19:27.195: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9917 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 00:19:27.541: INFO: stderr: "I0524 00:19:27.340224 2546 log.go:172] (0xc000b1f6b0) (0xc000272140) Create stream\nI0524 00:19:27.340288 2546 log.go:172] (0xc000b1f6b0) (0xc000272140) Stream added, broadcasting: 1\nI0524 00:19:27.342837 2546 log.go:172] (0xc000b1f6b0) Reply frame received for 1\nI0524 00:19:27.342886 2546 log.go:172] (0xc000b1f6b0) (0xc0003c26e0) Create stream\nI0524 00:19:27.342908 2546 log.go:172] (0xc000b1f6b0) (0xc0003c26e0) Stream added, broadcasting: 3\nI0524 00:19:27.344058 2546 log.go:172] (0xc000b1f6b0) Reply frame received for 3\nI0524 00:19:27.344100 2546 log.go:172] (0xc000b1f6b0) (0xc0004aa3c0) Create stream\nI0524 00:19:27.344111 2546 log.go:172] (0xc000b1f6b0) (0xc0004aa3c0) Stream added, broadcasting: 5\nI0524 00:19:27.344849 2546 log.go:172] (0xc000b1f6b0) Reply frame received for 5\nI0524 00:19:27.532795 2546 log.go:172] (0xc000b1f6b0) Data frame received for 3\nI0524 00:19:27.532931 2546 log.go:172] (0xc0003c26e0) (3) Data frame handling\nI0524 00:19:27.532968 2546 log.go:172] (0xc0003c26e0) (3) Data frame sent\nI0524 00:19:27.532996 2546 log.go:172] (0xc000b1f6b0) Data frame received for 3\nI0524 00:19:27.533011 2546 log.go:172] (0xc0003c26e0) (3) Data frame handling\nI0524 00:19:27.533034 2546 log.go:172] (0xc000b1f6b0) Data frame received for 5\nI0524 00:19:27.533049 2546 log.go:172] (0xc0004aa3c0) (5) Data frame handling\nI0524 00:19:27.533067 2546 log.go:172] (0xc0004aa3c0) (5) Data frame sent\nI0524 00:19:27.533088 
2546 log.go:172] (0xc000b1f6b0) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0524 00:19:27.533107 2546 log.go:172] (0xc0004aa3c0) (5) Data frame handling\nI0524 00:19:27.537959 2546 log.go:172] (0xc000b1f6b0) Data frame received for 1\nI0524 00:19:27.537988 2546 log.go:172] (0xc000272140) (1) Data frame handling\nI0524 00:19:27.538003 2546 log.go:172] (0xc000272140) (1) Data frame sent\nI0524 00:19:27.538028 2546 log.go:172] (0xc000b1f6b0) (0xc000272140) Stream removed, broadcasting: 1\nI0524 00:19:27.538375 2546 log.go:172] (0xc000b1f6b0) (0xc000272140) Stream removed, broadcasting: 1\nI0524 00:19:27.538404 2546 log.go:172] (0xc000b1f6b0) (0xc0003c26e0) Stream removed, broadcasting: 3\nI0524 00:19:27.538417 2546 log.go:172] (0xc000b1f6b0) (0xc0004aa3c0) Stream removed, broadcasting: 5\n" May 24 00:19:27.542: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 24 00:19:27.542: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 24 00:19:27.542: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9917 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 00:19:27.740: INFO: stderr: "I0524 00:19:27.674606 2565 log.go:172] (0xc000a8b600) (0xc0008325a0) Create stream\nI0524 00:19:27.674688 2565 log.go:172] (0xc000a8b600) (0xc0008325a0) Stream added, broadcasting: 1\nI0524 00:19:27.679460 2565 log.go:172] (0xc000a8b600) Reply frame received for 1\nI0524 00:19:27.679506 2565 log.go:172] (0xc000a8b600) (0xc0008274a0) Create stream\nI0524 00:19:27.679524 2565 log.go:172] (0xc000a8b600) (0xc0008274a0) Stream added, broadcasting: 3\nI0524 00:19:27.680539 2565 log.go:172] (0xc000a8b600) Reply frame received for 3\nI0524 00:19:27.680604 2565 log.go:172] (0xc000a8b600) (0xc000820c80) Create stream\nI0524 00:19:27.680638 2565 log.go:172] (0xc000a8b600) (0xc000820c80) Stream added, broadcasting: 5\nI0524 00:19:27.681679 2565 log.go:172] (0xc000a8b600) Reply frame received for 5\nI0524 00:19:27.732738 2565 log.go:172] (0xc000a8b600) Data frame received for 3\nI0524 00:19:27.732797 2565 log.go:172] (0xc0008274a0) (3) Data frame handling\nI0524 00:19:27.732820 2565 log.go:172] (0xc0008274a0) (3) Data frame sent\nI0524 00:19:27.732849 2565 log.go:172] (0xc000a8b600) Data frame received for 3\nI0524 00:19:27.732867 2565 log.go:172] (0xc0008274a0) (3) Data frame handling\nI0524 00:19:27.732895 2565 log.go:172] (0xc000a8b600) Data frame received for 5\nI0524 00:19:27.732921 2565 log.go:172] (0xc000820c80) (5) Data frame handling\nI0524 00:19:27.732953 2565 log.go:172] (0xc000820c80) (5) Data frame sent\nI0524 00:19:27.732966 2565 log.go:172] (0xc000a8b600) Data frame received for 5\nI0524 00:19:27.732977 2565 log.go:172] (0xc000820c80) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0524 00:19:27.734916 2565 log.go:172] (0xc000a8b600) Data frame received for 1\nI0524 00:19:27.734960 2565 log.go:172] (0xc0008325a0) (1) Data frame handling\nI0524 00:19:27.734988 2565 log.go:172] (0xc0008325a0) (1) Data frame sent\nI0524 00:19:27.735020 2565 log.go:172] (0xc000a8b600) (0xc0008325a0) Stream removed, broadcasting: 1\nI0524 00:19:27.735073 2565 log.go:172] (0xc000a8b600) Go away received\nI0524 00:19:27.735509 2565 log.go:172] (0xc000a8b600) (0xc0008325a0) Stream removed, broadcasting: 1\nI0524 00:19:27.735536 2565 
log.go:172] (0xc000a8b600) (0xc0008274a0) Stream removed, broadcasting: 3\nI0524 00:19:27.735555 2565 log.go:172] (0xc000a8b600) (0xc000820c80) Stream removed, broadcasting: 5\n" May 24 00:19:27.740: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 24 00:19:27.740: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 24 00:19:27.740: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 24 00:19:57.762: INFO: Deleting all statefulset in ns statefulset-9917 May 24 00:19:57.765: INFO: Scaling statefulset ss to 0 May 24 00:19:57.773: INFO: Waiting for statefulset status.replicas updated to 0 May 24 00:19:57.775: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:19:57.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9917" for this suite. • [SLOW TEST:92.560 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":288,"completed":160,"skipped":2670,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:19:57.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
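For reference, the pod this spec is about to create pairs a long-running container with a preStop exec hook that phones home to the handler pod set up in the step above. A minimal sketch of that shape, assuming an illustrative image, keep-alive command, and handler address (none of these are read from this run):

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook        # pod name as logged below; everything else is assumed
spec:
  containers:
  - name: pod-with-prestop-exec-hook
    image: busybox                        # illustrative image
    command: ["sh", "-c", "sleep 3600"]   # keep the container alive until the test deletes the pod
    lifecycle:
      preStop:
        exec:
          # the kubelet runs this inside the container before termination;
          # the HTTP handler pod records the request, which is what
          # "STEP: check prestop hook" verifies once the pod is gone
          command: ["sh", "-c", "wget -q -O- http://HANDLER_IP:8080/echo?msg=prestop"]   # HANDLER_IP is a placeholder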
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 24 00:20:05.947: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 24 00:20:06.001: INFO: Pod pod-with-prestop-exec-hook still exists May 24 00:20:08.001: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 24 00:20:08.006: INFO: Pod pod-with-prestop-exec-hook still exists May 24 00:20:10.001: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 24 00:20:10.005: INFO: Pod pod-with-prestop-exec-hook still exists May 24 00:20:12.001: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 24 00:20:12.006: INFO: Pod pod-with-prestop-exec-hook still exists May 24 00:20:14.001: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 24 00:20:14.007: INFO: Pod pod-with-prestop-exec-hook still exists May 24 00:20:16.001: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 24 00:20:16.005: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:20:16.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2802" for this suite. • [SLOW TEST:18.237 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":288,"completed":161,"skipped":2688,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:20:16.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token May 24 00:20:16.696: INFO: created pod pod-service-account-defaultsa May 24 00:20:16.696: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 24 00:20:16.709: INFO: created pod pod-service-account-mountsa May 24 00:20:16.709: INFO: pod pod-service-account-mountsa service account token volume mount: true May 24 00:20:16.785: INFO: created pod pod-service-account-nomountsa May 24 00:20:16.785: INFO: pod pod-service-account-nomountsa service account token volume mount: 
false May 24 00:20:16.798: INFO: created pod pod-service-account-defaultsa-mountspec May 24 00:20:16.798: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 24 00:20:16.835: INFO: created pod pod-service-account-mountsa-mountspec May 24 00:20:16.835: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 24 00:20:16.859: INFO: created pod pod-service-account-nomountsa-mountspec May 24 00:20:16.859: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 24 00:20:16.923: INFO: created pod pod-service-account-defaultsa-nomountspec May 24 00:20:16.923: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 24 00:20:16.962: INFO: created pod pod-service-account-mountsa-nomountspec May 24 00:20:16.962: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 24 00:20:16.999: INFO: created pod pod-service-account-nomountsa-nomountspec May 24 00:20:16.999: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:20:16.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-158" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":288,"completed":162,"skipped":2714,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:20:17.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs May 24 00:20:17.345: INFO: Waiting up to 5m0s for pod "pod-73a04e17-6d95-4d11-b7f6-c86bc1e4313e" in namespace "emptydir-5305" to be "Succeeded or Failed" May 24 00:20:17.377: INFO: Pod "pod-73a04e17-6d95-4d11-b7f6-c86bc1e4313e": Phase="Pending", Reason="", readiness=false. Elapsed: 32.824241ms May 24 00:20:19.407: INFO: Pod "pod-73a04e17-6d95-4d11-b7f6-c86bc1e4313e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062748278s May 24 00:20:21.479: INFO: Pod "pod-73a04e17-6d95-4d11-b7f6-c86bc1e4313e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134405011s May 24 00:20:23.528: INFO: Pod "pod-73a04e17-6d95-4d11-b7f6-c86bc1e4313e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.182985093s May 24 00:20:25.738: INFO: Pod "pod-73a04e17-6d95-4d11-b7f6-c86bc1e4313e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.39380991s May 24 00:20:28.007: INFO: Pod "pod-73a04e17-6d95-4d11-b7f6-c86bc1e4313e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.66199813s May 24 00:20:30.055: INFO: Pod "pod-73a04e17-6d95-4d11-b7f6-c86bc1e4313e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.710083888s STEP: Saw pod success May 24 00:20:30.055: INFO: Pod "pod-73a04e17-6d95-4d11-b7f6-c86bc1e4313e" satisfied condition "Succeeded or Failed" May 24 00:20:30.138: INFO: Trying to get logs from node latest-worker pod pod-73a04e17-6d95-4d11-b7f6-c86bc1e4313e container test-container: STEP: delete the pod May 24 00:20:30.209: INFO: Waiting for pod pod-73a04e17-6d95-4d11-b7f6-c86bc1e4313e to disappear May 24 00:20:30.407: INFO: Pod pod-73a04e17-6d95-4d11-b7f6-c86bc1e4313e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:20:30.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5305" for this suite. • [SLOW TEST:13.311 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":163,"skipped":2720,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:20:30.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-2067 May 24 00:20:37.194: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2067 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 24 00:20:37.540: INFO: stderr: "I0524 00:20:37.313865 2587 log.go:172] (0xc000a75340) (0xc0006cce60) Create stream\nI0524 00:20:37.313902 2587 log.go:172] (0xc000a75340) (0xc0006cce60) Stream added, broadcasting: 1\nI0524 00:20:37.318229 2587 log.go:172] (0xc000a75340) Reply frame received for 1\nI0524 00:20:37.318261 2587 log.go:172] (0xc000a75340) (0xc0006c54a0) Create stream\nI0524 00:20:37.318276 2587 log.go:172] (0xc000a75340) (0xc0006c54a0) Stream added, broadcasting: 3\nI0524 00:20:37.319219 2587 log.go:172] (0xc000a75340) Reply frame received for 3\nI0524 00:20:37.319246 2587 log.go:172] (0xc000a75340) (0xc0006b8a00) Create stream\nI0524 00:20:37.319255 2587 log.go:172] (0xc000a75340) (0xc0006b8a00) Stream added, broadcasting: 5\nI0524 00:20:37.320197 2587 log.go:172] 
(0xc000a75340) Reply frame received for 5\nI0524 00:20:37.525657 2587 log.go:172] (0xc000a75340) Data frame received for 5\nI0524 00:20:37.525692 2587 log.go:172] (0xc0006b8a00) (5) Data frame handling\nI0524 00:20:37.525712 2587 log.go:172] (0xc0006b8a00) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0524 00:20:37.531690 2587 log.go:172] (0xc000a75340) Data frame received for 3\nI0524 00:20:37.531720 2587 log.go:172] (0xc0006c54a0) (3) Data frame handling\nI0524 00:20:37.531742 2587 log.go:172] (0xc0006c54a0) (3) Data frame sent\nI0524 00:20:37.531898 2587 log.go:172] (0xc000a75340) Data frame received for 5\nI0524 00:20:37.531917 2587 log.go:172] (0xc0006b8a00) (5) Data frame handling\nI0524 00:20:37.532134 2587 log.go:172] (0xc000a75340) Data frame received for 3\nI0524 00:20:37.532161 2587 log.go:172] (0xc0006c54a0) (3) Data frame handling\nI0524 00:20:37.533890 2587 log.go:172] (0xc000a75340) Data frame received for 1\nI0524 00:20:37.533910 2587 log.go:172] (0xc0006cce60) (1) Data frame handling\nI0524 00:20:37.533921 2587 log.go:172] (0xc0006cce60) (1) Data frame sent\nI0524 00:20:37.533937 2587 log.go:172] (0xc000a75340) (0xc0006cce60) Stream removed, broadcasting: 1\nI0524 00:20:37.534197 2587 log.go:172] (0xc000a75340) Go away received\nI0524 00:20:37.534253 2587 log.go:172] (0xc000a75340) (0xc0006cce60) Stream removed, broadcasting: 1\nI0524 00:20:37.534285 2587 log.go:172] (0xc000a75340) (0xc0006c54a0) Stream removed, broadcasting: 3\nI0524 00:20:37.534301 2587 log.go:172] (0xc000a75340) (0xc0006b8a00) Stream removed, broadcasting: 5\n" May 24 00:20:37.540: INFO: stdout: "iptables" May 24 00:20:37.540: INFO: proxyMode: iptables May 24 00:20:37.545: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 24 00:20:37.584: INFO: Pod kube-proxy-mode-detector still exists May 24 00:20:39.584: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 24 00:20:39.588: INFO: Pod kube-proxy-mode-detector still exists May 24 00:20:41.584: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 24 00:20:41.588: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-2067 STEP: creating replication controller affinity-clusterip-timeout in namespace services-2067 I0524 00:20:41.631615 7 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-2067, replica count: 3 I0524 00:20:44.681947 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0524 00:20:47.682219 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 24 00:20:47.689: INFO: Creating new exec pod May 24 00:20:52.742: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2067 execpod-affinitykhnfv -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' May 24 00:20:52.990: INFO: stderr: "I0524 00:20:52.889719 2608 log.go:172] (0xc000b19290) (0xc00068f0e0) Create stream\nI0524 00:20:52.889783 2608 log.go:172] (0xc000b19290) (0xc00068f0e0) Stream added, broadcasting: 1\nI0524 00:20:52.892072 2608 log.go:172] (0xc000b19290) Reply frame received for 1\nI0524 00:20:52.892133 2608 log.go:172] (0xc000b19290) (0xc00051bd60) Create stream\nI0524 
00:20:52.892157 2608 log.go:172] (0xc000b19290) (0xc00051bd60) Stream added, broadcasting: 3\nI0524 00:20:52.893981 2608 log.go:172] (0xc000b19290) Reply frame received for 3\nI0524 00:20:52.894034 2608 log.go:172] (0xc000b19290) (0xc00068f680) Create stream\nI0524 00:20:52.894052 2608 log.go:172] (0xc000b19290) (0xc00068f680) Stream added, broadcasting: 5\nI0524 00:20:52.895871 2608 log.go:172] (0xc000b19290) Reply frame received for 5\nI0524 00:20:52.982293 2608 log.go:172] (0xc000b19290) Data frame received for 5\nI0524 00:20:52.982334 2608 log.go:172] (0xc00068f680) (5) Data frame handling\nI0524 00:20:52.982368 2608 log.go:172] (0xc00068f680) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nI0524 00:20:52.982617 2608 log.go:172] (0xc000b19290) Data frame received for 5\nI0524 00:20:52.982638 2608 log.go:172] (0xc00068f680) (5) Data frame handling\nI0524 00:20:52.982656 2608 log.go:172] (0xc00068f680) (5) Data frame sent\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI0524 00:20:52.983344 2608 log.go:172] (0xc000b19290) Data frame received for 3\nI0524 00:20:52.983378 2608 log.go:172] (0xc00051bd60) (3) Data frame handling\nI0524 00:20:52.983408 2608 log.go:172] (0xc000b19290) Data frame received for 5\nI0524 00:20:52.983426 2608 log.go:172] (0xc00068f680) (5) Data frame handling\nI0524 00:20:52.985411 2608 log.go:172] (0xc000b19290) Data frame received for 1\nI0524 00:20:52.985448 2608 log.go:172] (0xc00068f0e0) (1) Data frame handling\nI0524 00:20:52.985489 2608 log.go:172] (0xc00068f0e0) (1) Data frame sent\nI0524 00:20:52.985525 2608 log.go:172] (0xc000b19290) (0xc00068f0e0) Stream removed, broadcasting: 1\nI0524 00:20:52.985552 2608 log.go:172] (0xc000b19290) Go away received\nI0524 00:20:52.985916 2608 log.go:172] (0xc000b19290) (0xc00068f0e0) Stream removed, broadcasting: 1\nI0524 00:20:52.985938 2608 log.go:172] (0xc000b19290) (0xc00051bd60) Stream removed, broadcasting: 3\nI0524 00:20:52.985951 2608 log.go:172] (0xc000b19290) (0xc00068f680) Stream removed, broadcasting: 5\n" May 24 00:20:52.990: INFO: stdout: "" May 24 00:20:52.991: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2067 execpod-affinitykhnfv -- /bin/sh -x -c nc -zv -t -w 2 10.99.80.150 80' May 24 00:20:53.200: INFO: stderr: "I0524 00:20:53.127376 2627 log.go:172] (0xc000b50210) (0xc00064bcc0) Create stream\nI0524 00:20:53.127459 2627 log.go:172] (0xc000b50210) (0xc00064bcc0) Stream added, broadcasting: 1\nI0524 00:20:53.130235 2627 log.go:172] (0xc000b50210) Reply frame received for 1\nI0524 00:20:53.130268 2627 log.go:172] (0xc000b50210) (0xc000520320) Create stream\nI0524 00:20:53.130278 2627 log.go:172] (0xc000b50210) (0xc000520320) Stream added, broadcasting: 3\nI0524 00:20:53.131284 2627 log.go:172] (0xc000b50210) Reply frame received for 3\nI0524 00:20:53.131303 2627 log.go:172] (0xc000b50210) (0xc000310e60) Create stream\nI0524 00:20:53.131310 2627 log.go:172] (0xc000b50210) (0xc000310e60) Stream added, broadcasting: 5\nI0524 00:20:53.132190 2627 log.go:172] (0xc000b50210) Reply frame received for 5\nI0524 00:20:53.193372 2627 log.go:172] (0xc000b50210) Data frame received for 3\nI0524 00:20:53.193410 2627 log.go:172] (0xc000520320) (3) Data frame handling\nI0524 00:20:53.193511 2627 log.go:172] (0xc000b50210) Data frame received for 5\nI0524 00:20:53.193533 2627 log.go:172] (0xc000310e60) (5) Data frame handling\nI0524 00:20:53.193548 2627 log.go:172] (0xc000310e60) (5) Data 
frame sent\n+ nc -zv -t -w 2 10.99.80.150 80\nConnection to 10.99.80.150 80 port [tcp/http] succeeded!\nI0524 00:20:53.193582 2627 log.go:172] (0xc000b50210) Data frame received for 5\nI0524 00:20:53.193597 2627 log.go:172] (0xc000310e60) (5) Data frame handling\nI0524 00:20:53.194810 2627 log.go:172] (0xc000b50210) Data frame received for 1\nI0524 00:20:53.194829 2627 log.go:172] (0xc00064bcc0) (1) Data frame handling\nI0524 00:20:53.194842 2627 log.go:172] (0xc00064bcc0) (1) Data frame sent\nI0524 00:20:53.194856 2627 log.go:172] (0xc000b50210) (0xc00064bcc0) Stream removed, broadcasting: 1\nI0524 00:20:53.194907 2627 log.go:172] (0xc000b50210) Go away received\nI0524 00:20:53.195188 2627 log.go:172] (0xc000b50210) (0xc00064bcc0) Stream removed, broadcasting: 1\nI0524 00:20:53.195209 2627 log.go:172] (0xc000b50210) (0xc000520320) Stream removed, broadcasting: 3\nI0524 00:20:53.195227 2627 log.go:172] (0xc000b50210) (0xc000310e60) Stream removed, broadcasting: 5\n" May 24 00:20:53.200: INFO: stdout: "" May 24 00:20:53.200: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2067 execpod-affinitykhnfv -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.99.80.150:80/ ; done' May 24 00:20:53.526: INFO: stderr: "I0524 00:20:53.339066 2648 log.go:172] (0xc000510d10) (0xc000b226e0) Create stream\nI0524 00:20:53.339125 2648 log.go:172] (0xc000510d10) (0xc000b226e0) Stream added, broadcasting: 1\nI0524 00:20:53.344652 2648 log.go:172] (0xc000510d10) Reply frame received for 1\nI0524 00:20:53.344699 2648 log.go:172] (0xc000510d10) (0xc000506280) Create stream\nI0524 00:20:53.344712 2648 log.go:172] (0xc000510d10) (0xc000506280) Stream added, broadcasting: 3\nI0524 00:20:53.345923 2648 log.go:172] (0xc000510d10) Reply frame received for 3\nI0524 00:20:53.345987 2648 log.go:172] (0xc000510d10) (0xc0004c0dc0) Create stream\nI0524 00:20:53.346000 2648 log.go:172] (0xc000510d10) (0xc0004c0dc0) Stream added, broadcasting: 5\nI0524 00:20:53.346897 2648 log.go:172] (0xc000510d10) Reply frame received for 5\nI0524 00:20:53.416880 2648 log.go:172] (0xc000510d10) Data frame received for 5\nI0524 00:20:53.416923 2648 log.go:172] (0xc0004c0dc0) (5) Data frame handling\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.80.150:80/\nI0524 00:20:53.416982 2648 log.go:172] (0xc000510d10) Data frame received for 3\nI0524 00:20:53.417035 2648 log.go:172] (0xc000506280) (3) Data frame handling\nI0524 00:20:53.417051 2648 log.go:172] (0xc000506280) (3) Data frame sent\nI0524 00:20:53.417082 2648 log.go:172] (0xc0004c0dc0) (5) Data frame sent\nI0524 00:20:53.420639 2648 log.go:172] (0xc000510d10) Data frame received for 3\nI0524 00:20:53.420663 2648 log.go:172] (0xc000506280) (3) Data frame handling\nI0524 00:20:53.420677 2648 log.go:172] (0xc000506280) (3) Data frame sent\nI0524 00:20:53.421050 2648 log.go:172] (0xc000510d10) Data frame received for 5\nI0524 00:20:53.421067 2648 log.go:172] (0xc0004c0dc0) (5) Data frame handling\nI0524 00:20:53.421075 2648 log.go:172] (0xc0004c0dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.80.150:80/\nI0524 00:20:53.421086 2648 log.go:172] (0xc000510d10) Data frame received for 3\nI0524 00:20:53.421091 2648 log.go:172] (0xc000506280) (3) Data frame handling\nI0524 00:20:53.421095 2648 log.go:172] (0xc000506280) (3) Data frame sent\nI0524 00:20:53.428001 2648 log.go:172] (0xc000510d10) Data frame received for 3\nI0524 
00:20:53.428022 2648 log.go:172] (0xc000506280) (3) Data frame handling\nI0524 00:20:53.428039 2648 log.go:172] (0xc000506280) (3) Data frame sent\nI0524 00:20:53.428442 2648 log.go:172] (0xc000510d10) Data frame received for 5\nI0524 00:20:53.428471 2648 log.go:172] (0xc0004c0dc0) (5) Data frame handling\nI0524 00:20:53.428484 2648 log.go:172] (0xc0004c0dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.80.150:80/\nI0524 00:20:53.428503 2648 log.go:172] (0xc000510d10) Data frame received for 3\nI0524 00:20:53.428513 2648 log.go:172] (0xc000506280) (3) Data frame handling\nI0524 00:20:53.428526 2648 log.go:172] (0xc000506280) (3) Data frame sent\nI0524 00:20:53.433736 2648 log.go:172] (0xc000510d10) Data frame received for 3\nI0524 00:20:53.433756 2648 log.go:172] (0xc000506280) (3) Data frame handling\nI0524 00:20:53.433773 2648 log.go:172] (0xc000506280) (3) Data frame sent\nI0524 00:20:53.434148 2648 log.go:172] (0xc000510d10) Data frame received for 3\nI0524 00:20:53.434168 2648 log.go:172] (0xc000506280) (3) Data frame handling\nI0524 00:20:53.434176 2648 log.go:172] (0xc000506280) (3) Data frame sent\nI0524 00:20:53.434200 2648 log.go:172] (0xc000510d10) Data frame received for 5\nI0524 00:20:53.434209 2648 log.go:172] (0xc0004c0dc0) (5) Data frame handling\nI0524 00:20:53.434216 2648 log.go:172] (0xc0004c0dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.80.150:80/\nI0524 00:20:53.443611 2648 log.go:172] (0xc000510d10) Data frame received for 3\nI0524 00:20:53.443638 2648 log.go:172] (0xc000506280) (3) Data frame handling\nI0524 00:20:53.443655 2648 log.go:172] (0xc000506280) (3) Data frame sent\nI0524 00:20:53.443865 2648 log.go:172] (0xc000510d10) Data frame received for 5\nI0524 00:20:53.443877 2648 log.go:172] (0xc0004c0dc0) (5) Data frame handling\nI0524 00:20:53.443887 2648 log.go:172] (0xc0004c0dc0) (5) Data frame sent\n+ echo\n+ curl -q -sI0524 00:20:53.443973 2648 log.go:172] (0xc000510d10) Data frame received for 3\nI0524 00:20:53.444008 2648 log.go:172] (0xc000510d10) Data frame received for 5\nI0524 00:20:53.444044 2648 log.go:172] (0xc0004c0dc0) (5) Data frame handling\nI0524 00:20:53.444058 2648 log.go:172] (0xc0004c0dc0) (5) Data frame sent\n --connect-timeout 2 http://10.99.80.150:80/\nI0524 00:20:53.444078 2648 log.go:172] (0xc000506280) (3) Data frame handling\nI0524 00:20:53.444090 2648 log.go:172] (0xc000506280) (3) Data frame sent\nI0524 00:20:53.451831 2648 log.go:172] (0xc000510d10) Data frame received for 3\nI0524 00:20:53.451851 2648 log.go:172] (0xc000506280) (3) Data frame handling\nI0524 00:20:53.451867 2648 log.go:172] (0xc000506280) (3) Data frame sent\nI0524 00:20:53.452696 2648 log.go:172] (0xc000510d10) Data frame received for 3\nI0524 00:20:53.452732 2648 log.go:172] (0xc000506280) (3) Data frame handling\nI0524 00:20:53.452754 2648 log.go:172] (0xc000506280) (3) Data frame sent\nI0524 00:20:53.452773 2648 log.go:172] (0xc000510d10) Data frame received for 5\nI0524 00:20:53.452781 2648 log.go:172] (0xc0004c0dc0) (5) Data frame handling\nI0524 00:20:53.452792 2648 log.go:172] (0xc0004c0dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.80.150:80/\nI0524 00:20:53.457854 2648 log.go:172] (0xc000510d10) Data frame received for 3\nI0524 00:20:53.457879 2648 log.go:172] (0xc000506280) (3) Data frame handling\nI0524 00:20:53.457894 2648 log.go:172] (0xc000506280) (3) Data frame sent\nI0524 00:20:53.458459 2648 log.go:172] (0xc000510d10) Data frame received for 5\nI0524 
00:20:53.458478 2648 log.go:172] (0xc0004c0dc0) (5) Data frame handling\nI0524 00:20:53.458486 2648 log.go:172] (0xc0004c0dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.80.150:80/\nI0524 00:20:53.458514 2648 log.go:172] (0xc000510d10) Data frame received for 3\nI0524 00:20:53.458539 2648 log.go:172] (0xc000506280) (3) Data frame handling\nI0524 00:20:53.458565 2648 log.go:172] (0xc000506280) (3) Data frame sent\nI0524 00:20:53.462528 2648 log.go:172] (0xc000510d10) Data frame received for 3\nI0524 00:20:53.462543 2648 log.go:172] (0xc000506280) (3) Data frame handling\nI0524 00:20:53.462557 2648 log.go:172] (0xc000506280) (3) Data frame sent\nI0524 00:20:53.463095 2648 log.go:172] (0xc000510d10) Data frame received for 3\nI0524 00:20:53.463144 2648 log.go:172] (0xc000506280) (3) Data frame handling\nI0524 00:20:53.463162 2648 log.go:172] (0xc000506280) (3) Data frame sent\nI0524 00:20:53.463179 2648 log.go:172] (0xc000510d10) Data frame received for 5\nI0524 00:20:53.463192 2648 log.go:172] (0xc0004c0dc0) (5) Data frame handling\nI0524 00:20:53.463216 2648 log.go:172] (0xc0004c0dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.80.150:80/\nI0524 00:20:53.467117 2648 log.go:172] (0xc000510d10) Data frame received for 3\nI0524 00:20:53.467133 2648 log.go:172] (0xc000506280) (3) Data frame handling\nI0524 00:20:53.467143 2648 log.go:172] (0xc000506280) (3) Data frame sent\nI0524 00:20:53.467635 2648 log.go:172] (0xc000510d10) Data frame received for 3\nI0524 00:20:53.467652 2648 log.go:172] (0xc000506280) (3) Data frame handling\nI0524 00:20:53.467664 2648 log.go:172] (0xc000506280) (3) Data frame sent\nI0524 00:20:53.467683 2648 log.go:172] (0xc000510d10) Data frame received for 5\nI0524 00:20:53.467704 2648 log.go:172] (0xc0004c0dc0) (5) Data frame handling\nI0524 00:20:53.467722 2648 log.go:172] (0xc0004c0dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.80.150:80/\nI0524 00:20:53.472919 2648 log.go:172] (0xc000510d10) Data frame received for 3\nI0524 00:20:53.472937 2648 log.go:172] (0xc000506280) (3) Data frame handling\nI0524 00:20:53.472959 2648 log.go:172] (0xc000506280) (3) Data frame sent\nI0524 00:20:53.473452 2648 log.go:172] (0xc000510d10) Data frame received for 5\nI0524 00:20:53.473461 2648 log.go:172] (0xc0004c0dc0) (5) Data frame handling\nI0524 00:20:53.473468 2648 log.go:172] (0xc0004c0dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.80.150:80/\nI0524 00:20:53.473557 2648 log.go:172] (0xc000510d10) Data frame received for 3\nI0524 00:20:53.473564 2648 log.go:172] (0xc000506280) (3) Data frame handling\nI0524 00:20:53.473569 2648 log.go:172] (0xc000506280) (3) Data frame sent\nI0524 00:20:53.478309 2648 log.go:172] (0xc000510d10) Data frame received for 3\nI0524 00:20:53.478331 2648 log.go:172] (0xc000506280) (3) Data frame handling\nI0524 00:20:53.478349 2648 log.go:172] (0xc000506280) (3) Data frame sent\nI0524 00:20:53.479116 2648 log.go:172] (0xc000510d10) Data frame received for 5\nI0524 00:20:53.479134 2648 log.go:172] (0xc0004c0dc0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.80.150:80/\nI0524 00:20:53.479146 2648 log.go:172] (0xc000510d10) Data frame received for 3\nI0524 00:20:53.479166 2648 log.go:172] (0xc000506280) (3) Data frame handling\nI0524 00:20:53.479181 2648 log.go:172] (0xc000506280) (3) Data frame sent\nI0524 00:20:53.479194 2648 log.go:172] (0xc0004c0dc0) (5) Data frame sent\nI0524 00:20:53.484721 
2648 log.go:172] (0xc000510d10) Data frame received for 3\nI0524 00:20:53.484741 2648 log.go:172] (0xc000506280) (3) Data frame handling\nI0524 00:20:53.484764 2648 log.go:172] (0xc000506280) (3) Data frame sent\nI0524 00:20:53.485601 2648 log.go:172] (0xc000510d10) Data frame received for 5\nI0524 00:20:53.485621 2648 log.go:172] (0xc0004c0dc0) (5) Data frame handling\nI0524 00:20:53.485634 2648 log.go:172] (0xc0004c0dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.80.150:80/\nI0524 00:20:53.485653 2648 log.go:172] (0xc000510d10) Data frame received for 3\nI0524 00:20:53.485665 2648 log.go:172] (0xc000506280) (3) Data frame handling\nI0524 00:20:53.485676 2648 log.go:172] (0xc000506280) (3) Data frame sent\nI0524 00:20:53.495514 2648 log.go:172] (0xc000510d10) Data frame received for 3\nI0524 00:20:53.495538 2648 log.go:172] (0xc000506280) (3) Data frame handling\nI0524 00:20:53.495560 2648 log.go:172] (0xc000506280) (3) Data frame sent\nI0524 00:20:53.496321 2648 log.go:172] (0xc000510d10) Data frame received for 3\nI0524 00:20:53.496345 2648 log.go:172] (0xc000506280) (3) Data frame handling\nI0524 00:20:53.496353 2648 log.go:172] (0xc000506280) (3) Data frame sent\nI0524 00:20:53.496366 2648 log.go:172] (0xc000510d10) Data frame received for 5\nI0524 00:20:53.496371 2648 log.go:172] (0xc0004c0dc0) (5) Data frame handling\nI0524 00:20:53.496377 2648 log.go:172] (0xc0004c0dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.80.150:80/\nI0524 00:20:53.502953 2648 log.go:172] (0xc000510d10) Data frame received for 3\nI0524 00:20:53.502978 2648 log.go:172] (0xc000506280) (3) Data frame handling\nI0524 00:20:53.502998 2648 log.go:172] (0xc000506280) (3) Data frame sent\nI0524 00:20:53.503484 2648 log.go:172] (0xc000510d10) Data frame received for 3\nI0524 00:20:53.503521 2648 log.go:172] (0xc000506280) (3) Data frame handling\nI0524 00:20:53.503540 2648 log.go:172] (0xc000506280) (3) Data frame sent\nI0524 00:20:53.503563 2648 log.go:172] (0xc000510d10) Data frame received for 5\nI0524 00:20:53.503575 2648 log.go:172] (0xc0004c0dc0) (5) Data frame handling\nI0524 00:20:53.503595 2648 log.go:172] (0xc0004c0dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.80.150:80/\nI0524 00:20:53.507655 2648 log.go:172] (0xc000510d10) Data frame received for 3\nI0524 00:20:53.507669 2648 log.go:172] (0xc000506280) (3) Data frame handling\nI0524 00:20:53.507682 2648 log.go:172] (0xc000506280) (3) Data frame sent\nI0524 00:20:53.508091 2648 log.go:172] (0xc000510d10) Data frame received for 5\nI0524 00:20:53.508120 2648 log.go:172] (0xc0004c0dc0) (5) Data frame handling\nI0524 00:20:53.508134 2648 log.go:172] (0xc0004c0dc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.80.150:80/\nI0524 00:20:53.508159 2648 log.go:172] (0xc000510d10) Data frame received for 3\nI0524 00:20:53.508170 2648 log.go:172] (0xc000506280) (3) Data frame handling\nI0524 00:20:53.508187 2648 log.go:172] (0xc000506280) (3) Data frame sent\nI0524 00:20:53.512809 2648 log.go:172] (0xc000510d10) Data frame received for 3\nI0524 00:20:53.512836 2648 log.go:172] (0xc000506280) (3) Data frame handling\nI0524 00:20:53.512853 2648 log.go:172] (0xc000506280) (3) Data frame sent\nI0524 00:20:53.513228 2648 log.go:172] (0xc000510d10) Data frame received for 5\nI0524 00:20:53.513317 2648 log.go:172] (0xc0004c0dc0) (5) Data frame handling\nI0524 00:20:53.513336 2648 log.go:172] (0xc0004c0dc0) (5) Data frame sent\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://10.99.80.150:80/\nI0524 00:20:53.513351 2648 log.go:172] (0xc000510d10) Data frame received for 3\nI0524 00:20:53.513359 2648 log.go:172] (0xc000506280) (3) Data frame handling\nI0524 00:20:53.513372 2648 log.go:172] (0xc000506280) (3) Data frame sent\nI0524 00:20:53.519163 2648 log.go:172] (0xc000510d10) Data frame received for 3\nI0524 00:20:53.519180 2648 log.go:172] (0xc000506280) (3) Data frame handling\nI0524 00:20:53.519190 2648 log.go:172] (0xc000506280) (3) Data frame sent\nI0524 00:20:53.519893 2648 log.go:172] (0xc000510d10) Data frame received for 3\nI0524 00:20:53.519917 2648 log.go:172] (0xc000506280) (3) Data frame handling\nI0524 00:20:53.520007 2648 log.go:172] (0xc000510d10) Data frame received for 5\nI0524 00:20:53.520033 2648 log.go:172] (0xc0004c0dc0) (5) Data frame handling\nI0524 00:20:53.521807 2648 log.go:172] (0xc000510d10) Data frame received for 1\nI0524 00:20:53.521822 2648 log.go:172] (0xc000b226e0) (1) Data frame handling\nI0524 00:20:53.521836 2648 log.go:172] (0xc000b226e0) (1) Data frame sent\nI0524 00:20:53.521849 2648 log.go:172] (0xc000510d10) (0xc000b226e0) Stream removed, broadcasting: 1\nI0524 00:20:53.521858 2648 log.go:172] (0xc000510d10) Go away received\nI0524 00:20:53.522179 2648 log.go:172] (0xc000510d10) (0xc000b226e0) Stream removed, broadcasting: 1\nI0524 00:20:53.522192 2648 log.go:172] (0xc000510d10) (0xc000506280) Stream removed, broadcasting: 3\nI0524 00:20:53.522197 2648 log.go:172] (0xc000510d10) (0xc0004c0dc0) Stream removed, broadcasting: 5\n" May 24 00:20:53.526: INFO: stdout: "\naffinity-clusterip-timeout-n2vsb\naffinity-clusterip-timeout-n2vsb\naffinity-clusterip-timeout-n2vsb\naffinity-clusterip-timeout-n2vsb\naffinity-clusterip-timeout-n2vsb\naffinity-clusterip-timeout-n2vsb\naffinity-clusterip-timeout-n2vsb\naffinity-clusterip-timeout-n2vsb\naffinity-clusterip-timeout-n2vsb\naffinity-clusterip-timeout-n2vsb\naffinity-clusterip-timeout-n2vsb\naffinity-clusterip-timeout-n2vsb\naffinity-clusterip-timeout-n2vsb\naffinity-clusterip-timeout-n2vsb\naffinity-clusterip-timeout-n2vsb\naffinity-clusterip-timeout-n2vsb" May 24 00:20:53.526: INFO: Received response from host: May 24 00:20:53.526: INFO: Received response from host: affinity-clusterip-timeout-n2vsb May 24 00:20:53.526: INFO: Received response from host: affinity-clusterip-timeout-n2vsb May 24 00:20:53.526: INFO: Received response from host: affinity-clusterip-timeout-n2vsb May 24 00:20:53.526: INFO: Received response from host: affinity-clusterip-timeout-n2vsb May 24 00:20:53.526: INFO: Received response from host: affinity-clusterip-timeout-n2vsb May 24 00:20:53.526: INFO: Received response from host: affinity-clusterip-timeout-n2vsb May 24 00:20:53.526: INFO: Received response from host: affinity-clusterip-timeout-n2vsb May 24 00:20:53.526: INFO: Received response from host: affinity-clusterip-timeout-n2vsb May 24 00:20:53.526: INFO: Received response from host: affinity-clusterip-timeout-n2vsb May 24 00:20:53.526: INFO: Received response from host: affinity-clusterip-timeout-n2vsb May 24 00:20:53.526: INFO: Received response from host: affinity-clusterip-timeout-n2vsb May 24 00:20:53.526: INFO: Received response from host: affinity-clusterip-timeout-n2vsb May 24 00:20:53.526: INFO: Received response from host: affinity-clusterip-timeout-n2vsb May 24 00:20:53.526: INFO: Received response from host: affinity-clusterip-timeout-n2vsb May 24 00:20:53.526: INFO: Received response from host: affinity-clusterip-timeout-n2vsb May 24 00:20:53.526: INFO: 
Received response from host: affinity-clusterip-timeout-n2vsb May 24 00:20:53.526: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2067 execpod-affinitykhnfv -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.99.80.150:80/' May 24 00:20:53.751: INFO: stderr: "I0524 00:20:53.671775 2668 log.go:172] (0xc0000e8b00) (0xc000278dc0) Create stream\nI0524 00:20:53.671837 2668 log.go:172] (0xc0000e8b00) (0xc000278dc0) Stream added, broadcasting: 1\nI0524 00:20:53.674894 2668 log.go:172] (0xc0000e8b00) Reply frame received for 1\nI0524 00:20:53.674962 2668 log.go:172] (0xc0000e8b00) (0xc000356dc0) Create stream\nI0524 00:20:53.675039 2668 log.go:172] (0xc0000e8b00) (0xc000356dc0) Stream added, broadcasting: 3\nI0524 00:20:53.675811 2668 log.go:172] (0xc0000e8b00) Reply frame received for 3\nI0524 00:20:53.675838 2668 log.go:172] (0xc0000e8b00) (0xc00015f400) Create stream\nI0524 00:20:53.675849 2668 log.go:172] (0xc0000e8b00) (0xc00015f400) Stream added, broadcasting: 5\nI0524 00:20:53.676610 2668 log.go:172] (0xc0000e8b00) Reply frame received for 5\nI0524 00:20:53.740912 2668 log.go:172] (0xc0000e8b00) Data frame received for 5\nI0524 00:20:53.740948 2668 log.go:172] (0xc00015f400) (5) Data frame handling\nI0524 00:20:53.740970 2668 log.go:172] (0xc00015f400) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.99.80.150:80/\nI0524 00:20:53.743377 2668 log.go:172] (0xc0000e8b00) Data frame received for 3\nI0524 00:20:53.743399 2668 log.go:172] (0xc000356dc0) (3) Data frame handling\nI0524 00:20:53.743410 2668 log.go:172] (0xc000356dc0) (3) Data frame sent\nI0524 00:20:53.743800 2668 log.go:172] (0xc0000e8b00) Data frame received for 3\nI0524 00:20:53.743890 2668 log.go:172] (0xc000356dc0) (3) Data frame handling\nI0524 00:20:53.743921 2668 log.go:172] (0xc0000e8b00) Data frame received for 5\nI0524 00:20:53.743933 2668 log.go:172] (0xc00015f400) (5) Data frame handling\nI0524 00:20:53.746236 2668 log.go:172] (0xc0000e8b00) Data frame received for 1\nI0524 00:20:53.746260 2668 log.go:172] (0xc000278dc0) (1) Data frame handling\nI0524 00:20:53.746272 2668 log.go:172] (0xc000278dc0) (1) Data frame sent\nI0524 00:20:53.746288 2668 log.go:172] (0xc0000e8b00) (0xc000278dc0) Stream removed, broadcasting: 1\nI0524 00:20:53.746314 2668 log.go:172] (0xc0000e8b00) Go away received\nI0524 00:20:53.746837 2668 log.go:172] (0xc0000e8b00) (0xc000278dc0) Stream removed, broadcasting: 1\nI0524 00:20:53.746873 2668 log.go:172] (0xc0000e8b00) (0xc000356dc0) Stream removed, broadcasting: 3\nI0524 00:20:53.746900 2668 log.go:172] (0xc0000e8b00) (0xc00015f400) Stream removed, broadcasting: 5\n" May 24 00:20:53.751: INFO: stdout: "affinity-clusterip-timeout-n2vsb" May 24 00:21:08.751: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-2067 execpod-affinitykhnfv -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.99.80.150:80/' May 24 00:21:09.013: INFO: stderr: "I0524 00:21:08.894574 2688 log.go:172] (0xc0009c94a0) (0xc000c48460) Create stream\nI0524 00:21:08.894629 2688 log.go:172] (0xc0009c94a0) (0xc000c48460) Stream added, broadcasting: 1\nI0524 00:21:08.900553 2688 log.go:172] (0xc0009c94a0) Reply frame received for 1\nI0524 00:21:08.900593 2688 log.go:172] (0xc0009c94a0) (0xc0006ecaa0) Create stream\nI0524 00:21:08.900603 2688 log.go:172] (0xc0009c94a0) (0xc0006ecaa0) Stream added, broadcasting: 3\nI0524 00:21:08.901947 2688 
log.go:172] (0xc0009c94a0) Reply frame received for 3\nI0524 00:21:08.902004 2688 log.go:172] (0xc0009c94a0) (0xc000642280) Create stream\nI0524 00:21:08.902020 2688 log.go:172] (0xc0009c94a0) (0xc000642280) Stream added, broadcasting: 5\nI0524 00:21:08.902861 2688 log.go:172] (0xc0009c94a0) Reply frame received for 5\nI0524 00:21:08.997523 2688 log.go:172] (0xc0009c94a0) Data frame received for 5\nI0524 00:21:08.997549 2688 log.go:172] (0xc000642280) (5) Data frame handling\nI0524 00:21:08.997567 2688 log.go:172] (0xc000642280) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.99.80.150:80/\nI0524 00:21:09.004250 2688 log.go:172] (0xc0009c94a0) Data frame received for 3\nI0524 00:21:09.004276 2688 log.go:172] (0xc0006ecaa0) (3) Data frame handling\nI0524 00:21:09.004294 2688 log.go:172] (0xc0006ecaa0) (3) Data frame sent\nI0524 00:21:09.005018 2688 log.go:172] (0xc0009c94a0) Data frame received for 3\nI0524 00:21:09.005037 2688 log.go:172] (0xc0006ecaa0) (3) Data frame handling\nI0524 00:21:09.005055 2688 log.go:172] (0xc0009c94a0) Data frame received for 5\nI0524 00:21:09.005080 2688 log.go:172] (0xc000642280) (5) Data frame handling\nI0524 00:21:09.007744 2688 log.go:172] (0xc0009c94a0) Data frame received for 1\nI0524 00:21:09.007784 2688 log.go:172] (0xc000c48460) (1) Data frame handling\nI0524 00:21:09.007816 2688 log.go:172] (0xc000c48460) (1) Data frame sent\nI0524 00:21:09.007842 2688 log.go:172] (0xc0009c94a0) (0xc000c48460) Stream removed, broadcasting: 1\nI0524 00:21:09.007922 2688 log.go:172] (0xc0009c94a0) Go away received\nI0524 00:21:09.008367 2688 log.go:172] (0xc0009c94a0) (0xc000c48460) Stream removed, broadcasting: 1\nI0524 00:21:09.008398 2688 log.go:172] (0xc0009c94a0) (0xc0006ecaa0) Stream removed, broadcasting: 3\nI0524 00:21:09.008416 2688 log.go:172] (0xc0009c94a0) (0xc000642280) Stream removed, broadcasting: 5\n" May 24 00:21:09.013: INFO: stdout: "affinity-clusterip-timeout-pjp42" May 24 00:21:09.013: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-2067, will wait for the garbage collector to delete the pods May 24 00:21:09.155: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 6.868459ms May 24 00:21:09.655: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 500.298957ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:21:25.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2067" for this suite. 
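The behavior verified above is ClientIP session affinity with a timeout: sixteen back-to-back requests all landed on affinity-clusterip-timeout-n2vsb, and only after the roughly 15-second pause did a request land on a different backend (affinity-clusterip-timeout-pjp42). That comes from a Service of roughly this shape; the selector, ports, and exact timeout value are assumptions for illustration:

apiVersion: v1
kind: Service
metadata:
  name: affinity-clusterip-timeout
  namespace: services-2067
spec:
  type: ClusterIP
  selector:
    name: affinity-clusterip-timeout   # assumed; must match the replication controller's pod labels
  ports:
  - port: 80
    targetPort: 9376                   # assumed backend port
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      # once a client is idle longer than this, kube-proxy may route it to a
      # different backend, which is exactly what the pause between the last two curls exercises
      timeoutSeconds: 10               # assumed short value so the test can observe expiry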
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:54.997 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":164,"skipped":2741,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:21:25.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-54c95c53-bf46-43bb-b279-f2db922ed124 STEP: Creating a pod to test consume configMaps May 24 00:21:25.531: INFO: Waiting up to 5m0s for pod "pod-configmaps-29e21a37-70cf-455d-ba49-b82259884306" in namespace "configmap-2042" to be "Succeeded or Failed" May 24 00:21:25.535: INFO: Pod "pod-configmaps-29e21a37-70cf-455d-ba49-b82259884306": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0348ms May 24 00:21:27.539: INFO: Pod "pod-configmaps-29e21a37-70cf-455d-ba49-b82259884306": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007718174s May 24 00:21:29.551: INFO: Pod "pod-configmaps-29e21a37-70cf-455d-ba49-b82259884306": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020150558s STEP: Saw pod success May 24 00:21:29.551: INFO: Pod "pod-configmaps-29e21a37-70cf-455d-ba49-b82259884306" satisfied condition "Succeeded or Failed" May 24 00:21:29.554: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-29e21a37-70cf-455d-ba49-b82259884306 container configmap-volume-test: STEP: delete the pod May 24 00:21:29.573: INFO: Waiting for pod pod-configmaps-29e21a37-70cf-455d-ba49-b82259884306 to disappear May 24 00:21:29.577: INFO: Pod pod-configmaps-29e21a37-70cf-455d-ba49-b82259884306 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:21:29.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2042" for this suite. 
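The consume-configMaps pattern just exercised mounts a ConfigMap as a volume and reads a key back as a file. A minimal sketch, with an assumed key/value, mount path, and image (only the ConfigMap name comes from the log):

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-54c95c53-bf46-43bb-b279-f2db922ed124
data:
  data-1: value-1                      # assumed key/value checked by the test
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example         # illustrative name
spec:
  restartPolicy: Never                 # pod runs once, then the test reads its logs
  containers:
  - name: configmap-volume-test        # container name as logged above
    image: busybox                     # illustrative; the suite uses its own test image
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-54c95c53-bf46-43bb-b279-f2db922ed124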
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":165,"skipped":2752,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:21:29.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 00:21:30.964: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 24 00:21:32.975: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876490, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876490, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876491, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876490, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 00:21:36.072: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:21:36.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7793" for this suite. STEP: Destroying namespace "webhook-7793-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.168 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":288,"completed":166,"skipped":2754,"failed":0} SSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:21:36.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars May 24 00:21:36.856: INFO: Waiting up to 5m0s for pod "downward-api-34b64f35-9fb1-4a93-94d0-d5426594ad1f" in namespace "downward-api-142" to be "Succeeded or Failed" May 24 00:21:36.875: INFO: Pod "downward-api-34b64f35-9fb1-4a93-94d0-d5426594ad1f": Phase="Pending", Reason="", readiness=false. Elapsed: 18.934167ms May 24 00:21:38.879: INFO: Pod "downward-api-34b64f35-9fb1-4a93-94d0-d5426594ad1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022894712s May 24 00:21:40.910: INFO: Pod "downward-api-34b64f35-9fb1-4a93-94d0-d5426594ad1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053924149s STEP: Saw pod success May 24 00:21:40.911: INFO: Pod "downward-api-34b64f35-9fb1-4a93-94d0-d5426594ad1f" satisfied condition "Succeeded or Failed" May 24 00:21:40.913: INFO: Trying to get logs from node latest-worker2 pod downward-api-34b64f35-9fb1-4a93-94d0-d5426594ad1f container dapi-container: STEP: delete the pod May 24 00:21:40.951: INFO: Waiting for pod downward-api-34b64f35-9fb1-4a93-94d0-d5426594ad1f to disappear May 24 00:21:40.958: INFO: Pod downward-api-34b64f35-9fb1-4a93-94d0-d5426594ad1f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:21:40.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-142" for this suite. 
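Providing the pod UID as an environment variable, as just verified, uses a downward API fieldRef in the container env. A minimal sketch; the container name matches the log, while the pod name, image, and command are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example           # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                     # illustrative image
    command: ["sh", "-c", "env"]       # prints POD_UID so the test can grep the container logs
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid      # the pod's UID, resolved by the kubelet at start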
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":288,"completed":167,"skipped":2764,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:21:40.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 24 00:21:41.000: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-2296 I0524 00:21:41.012244 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-2296, replica count: 1 I0524 00:21:42.062641 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0524 00:21:43.062936 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0524 00:21:44.063225 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0524 00:21:45.063454 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 24 00:21:45.196: INFO: Created: latency-svc-n9b7b May 24 00:21:45.213: INFO: Got endpoints: latency-svc-n9b7b [50.132324ms] May 24 00:21:45.276: INFO: Created: latency-svc-5b7j6 May 24 00:21:45.280: INFO: Got endpoints: latency-svc-5b7j6 [66.742744ms] May 24 00:21:45.353: INFO: Created: latency-svc-xjbb6 May 24 00:21:45.414: INFO: Got endpoints: latency-svc-xjbb6 [200.298554ms] May 24 00:21:45.454: INFO: Created: latency-svc-tzpt6 May 24 00:21:45.472: INFO: Got endpoints: latency-svc-tzpt6 [258.543473ms] May 24 00:21:45.496: INFO: Created: latency-svc-2wgmd May 24 00:21:45.513: INFO: Got endpoints: latency-svc-2wgmd [299.282727ms] May 24 00:21:45.576: INFO: Created: latency-svc-7xxvk May 24 00:21:45.623: INFO: Got endpoints: latency-svc-7xxvk [409.187857ms] May 24 00:21:45.623: INFO: Created: latency-svc-wv7jm May 24 00:21:45.653: INFO: Got endpoints: latency-svc-wv7jm [439.289532ms] May 24 00:21:45.707: INFO: Created: latency-svc-7w5j4 May 24 00:21:45.724: INFO: Got endpoints: latency-svc-7w5j4 [510.195328ms] May 24 00:21:45.755: INFO: Created: latency-svc-c657d May 24 00:21:45.772: INFO: Got endpoints: latency-svc-c657d [558.505072ms] May 24 00:21:45.797: INFO: Created: latency-svc-w697h May 24 00:21:45.863: INFO: Got endpoints: latency-svc-w697h [649.77314ms] May 24 00:21:45.903: INFO: Created: latency-svc-pzcjw May 24 00:21:45.925: INFO: Got endpoints: latency-svc-pzcjw [711.466543ms] May 24 00:21:46.013: INFO: Created: latency-svc-xbm4d May 24 00:21:46.037: INFO: Created: latency-svc-87lck May 24 00:21:46.038: INFO: Got endpoints: latency-svc-xbm4d [824.140913ms] May 24 00:21:46.079: INFO: Got endpoints: latency-svc-87lck 
[864.67405ms] May 24 00:21:46.161: INFO: Created: latency-svc-pjzsr May 24 00:21:46.196: INFO: Got endpoints: latency-svc-pjzsr [982.021889ms] May 24 00:21:46.270: INFO: Created: latency-svc-22dbv May 24 00:21:46.283: INFO: Got endpoints: latency-svc-22dbv [1.068866314s] May 24 00:21:46.330: INFO: Created: latency-svc-nsbfv May 24 00:21:46.345: INFO: Got endpoints: latency-svc-nsbfv [1.131097118s] May 24 00:21:46.436: INFO: Created: latency-svc-r5prm May 24 00:21:46.436: INFO: Got endpoints: latency-svc-r5prm [1.155415796s] May 24 00:21:46.492: INFO: Created: latency-svc-dv5mb May 24 00:21:46.507: INFO: Got endpoints: latency-svc-dv5mb [1.092803419s] May 24 00:21:46.594: INFO: Created: latency-svc-rsbnn May 24 00:21:46.611: INFO: Got endpoints: latency-svc-rsbnn [1.13917158s] May 24 00:21:46.657: INFO: Created: latency-svc-24txb May 24 00:21:46.676: INFO: Got endpoints: latency-svc-24txb [1.163297315s] May 24 00:21:46.725: INFO: Created: latency-svc-6frv5 May 24 00:21:46.773: INFO: Got endpoints: latency-svc-6frv5 [1.150654524s] May 24 00:21:46.775: INFO: Created: latency-svc-8q2nr May 24 00:21:46.815: INFO: Got endpoints: latency-svc-8q2nr [1.162471315s] May 24 00:21:46.875: INFO: Created: latency-svc-vw95c May 24 00:21:46.882: INFO: Got endpoints: latency-svc-vw95c [1.158091468s] May 24 00:21:46.912: INFO: Created: latency-svc-hwnnz May 24 00:21:46.947: INFO: Got endpoints: latency-svc-hwnnz [1.175179784s] May 24 00:21:47.033: INFO: Created: latency-svc-hh7sq May 24 00:21:47.064: INFO: Got endpoints: latency-svc-hh7sq [1.200447275s] May 24 00:21:47.151: INFO: Created: latency-svc-gktxp May 24 00:21:47.162: INFO: Got endpoints: latency-svc-gktxp [1.23713972s] May 24 00:21:47.218: INFO: Created: latency-svc-g4gg7 May 24 00:21:47.230: INFO: Got endpoints: latency-svc-g4gg7 [1.192784732s] May 24 00:21:47.351: INFO: Created: latency-svc-f595f May 24 00:21:47.363: INFO: Got endpoints: latency-svc-f595f [1.284559474s] May 24 00:21:47.403: INFO: Created: latency-svc-p7kn2 May 24 00:21:47.429: INFO: Got endpoints: latency-svc-p7kn2 [1.233055995s] May 24 00:21:47.522: INFO: Created: latency-svc-9ff6v May 24 00:21:47.537: INFO: Got endpoints: latency-svc-9ff6v [1.254571122s] May 24 00:21:47.565: INFO: Created: latency-svc-86xvq May 24 00:21:47.579: INFO: Got endpoints: latency-svc-86xvq [1.234574533s] May 24 00:21:47.613: INFO: Created: latency-svc-lrkxr May 24 00:21:47.690: INFO: Got endpoints: latency-svc-lrkxr [1.253873162s] May 24 00:21:47.694: INFO: Created: latency-svc-hdgnb May 24 00:21:47.701: INFO: Got endpoints: latency-svc-hdgnb [1.194289318s] May 24 00:21:47.729: INFO: Created: latency-svc-rh7xr May 24 00:21:47.744: INFO: Got endpoints: latency-svc-rh7xr [1.132218859s] May 24 00:21:47.872: INFO: Created: latency-svc-4gm4t May 24 00:21:47.897: INFO: Got endpoints: latency-svc-4gm4t [1.220982486s] May 24 00:21:47.956: INFO: Created: latency-svc-v52mn May 24 00:21:47.967: INFO: Got endpoints: latency-svc-v52mn [1.193446366s] May 24 00:21:48.031: INFO: Created: latency-svc-pw9nt May 24 00:21:48.063: INFO: Got endpoints: latency-svc-pw9nt [1.247211372s] May 24 00:21:48.093: INFO: Created: latency-svc-sk95n May 24 00:21:48.111: INFO: Got endpoints: latency-svc-sk95n [1.228804122s] May 24 00:21:48.193: INFO: Created: latency-svc-wqqm5 May 24 00:21:48.203: INFO: Got endpoints: latency-svc-wqqm5 [1.255417648s] May 24 00:21:48.267: INFO: Created: latency-svc-q6bph May 24 00:21:48.285: INFO: Got endpoints: latency-svc-q6bph [1.221339586s] May 24 00:21:48.343: INFO: Created: latency-svc-4d8h6 May 
24 00:21:48.367: INFO: Got endpoints: latency-svc-4d8h6 [1.204946516s] May 24 00:21:48.397: INFO: Created: latency-svc-kl5nm May 24 00:21:48.435: INFO: Got endpoints: latency-svc-kl5nm [1.204252325s] May 24 00:21:48.498: INFO: Created: latency-svc-qwzpn May 24 00:21:48.508: INFO: Got endpoints: latency-svc-qwzpn [1.144532866s] May 24 00:21:48.536: INFO: Created: latency-svc-7975g May 24 00:21:48.560: INFO: Got endpoints: latency-svc-7975g [1.130654127s] May 24 00:21:48.596: INFO: Created: latency-svc-4qzcj May 24 00:21:48.660: INFO: Got endpoints: latency-svc-4qzcj [1.12282485s] May 24 00:21:48.693: INFO: Created: latency-svc-58fww May 24 00:21:48.728: INFO: Got endpoints: latency-svc-58fww [1.147984029s] May 24 00:21:48.803: INFO: Created: latency-svc-wdztx May 24 00:21:48.807: INFO: Got endpoints: latency-svc-wdztx [1.117824005s] May 24 00:21:48.844: INFO: Created: latency-svc-smj95 May 24 00:21:48.867: INFO: Got endpoints: latency-svc-smj95 [1.165995524s] May 24 00:21:48.992: INFO: Created: latency-svc-f7zq6 May 24 00:21:49.000: INFO: Got endpoints: latency-svc-f7zq6 [1.256268602s] May 24 00:21:49.037: INFO: Created: latency-svc-hr5dt May 24 00:21:49.066: INFO: Got endpoints: latency-svc-hr5dt [1.16910804s] May 24 00:21:49.090: INFO: Created: latency-svc-69g9j May 24 00:21:49.151: INFO: Got endpoints: latency-svc-69g9j [1.183567373s] May 24 00:21:49.165: INFO: Created: latency-svc-skqkl May 24 00:21:49.180: INFO: Got endpoints: latency-svc-skqkl [1.11733923s] May 24 00:21:49.234: INFO: Created: latency-svc-kcwcv May 24 00:21:49.331: INFO: Got endpoints: latency-svc-kcwcv [1.219767363s] May 24 00:21:49.370: INFO: Created: latency-svc-5dhgh May 24 00:21:49.385: INFO: Got endpoints: latency-svc-5dhgh [1.182459876s] May 24 00:21:49.412: INFO: Created: latency-svc-phnrw May 24 00:21:49.498: INFO: Got endpoints: latency-svc-phnrw [1.212744552s] May 24 00:21:49.521: INFO: Created: latency-svc-9vtjx May 24 00:21:49.530: INFO: Got endpoints: latency-svc-9vtjx [1.162490376s] May 24 00:21:49.557: INFO: Created: latency-svc-8v7pt May 24 00:21:49.567: INFO: Got endpoints: latency-svc-8v7pt [1.13217642s] May 24 00:21:49.667: INFO: Created: latency-svc-hk924 May 24 00:21:49.668: INFO: Got endpoints: latency-svc-hk924 [1.160039163s] May 24 00:21:49.700: INFO: Created: latency-svc-x6c8v May 24 00:21:49.712: INFO: Got endpoints: latency-svc-x6c8v [1.152079s] May 24 00:21:49.811: INFO: Created: latency-svc-jnqkx May 24 00:21:49.817: INFO: Got endpoints: latency-svc-jnqkx [1.157089848s] May 24 00:21:49.868: INFO: Created: latency-svc-xprn5 May 24 00:21:49.886: INFO: Got endpoints: latency-svc-xprn5 [1.158423935s] May 24 00:21:49.947: INFO: Created: latency-svc-flbxq May 24 00:21:49.952: INFO: Got endpoints: latency-svc-flbxq [1.144694083s] May 24 00:21:50.000: INFO: Created: latency-svc-42dxl May 24 00:21:50.031: INFO: Got endpoints: latency-svc-42dxl [1.163708595s] May 24 00:21:50.114: INFO: Created: latency-svc-n6xds May 24 00:21:50.145: INFO: Got endpoints: latency-svc-n6xds [1.144994208s] May 24 00:21:50.146: INFO: Created: latency-svc-cffkk May 24 00:21:50.174: INFO: Got endpoints: latency-svc-cffkk [1.107967675s] May 24 00:21:50.264: INFO: Created: latency-svc-sdvbh May 24 00:21:50.270: INFO: Got endpoints: latency-svc-sdvbh [1.119821957s] May 24 00:21:50.325: INFO: Created: latency-svc-qdrdt May 24 00:21:50.407: INFO: Got endpoints: latency-svc-qdrdt [1.227221937s] May 24 00:21:50.432: INFO: Created: latency-svc-497rc May 24 00:21:50.446: INFO: Got endpoints: latency-svc-497rc [1.115264203s] May 24 
00:21:50.547: INFO: Created: latency-svc-4jdzg May 24 00:21:50.588: INFO: Got endpoints: latency-svc-4jdzg [1.202336128s] May 24 00:21:50.618: INFO: Created: latency-svc-k7wnj May 24 00:21:50.632: INFO: Got endpoints: latency-svc-k7wnj [1.13434802s] May 24 00:21:50.708: INFO: Created: latency-svc-glm58 May 24 00:21:50.711: INFO: Got endpoints: latency-svc-glm58 [1.180958524s] May 24 00:21:50.805: INFO: Created: latency-svc-496fz May 24 00:21:50.851: INFO: Got endpoints: latency-svc-496fz [1.283932089s] May 24 00:21:50.870: INFO: Created: latency-svc-njzn9 May 24 00:21:50.886: INFO: Got endpoints: latency-svc-njzn9 [1.217935381s] May 24 00:21:50.925: INFO: Created: latency-svc-wjvkl May 24 00:21:50.941: INFO: Got endpoints: latency-svc-wjvkl [1.22893345s] May 24 00:21:51.007: INFO: Created: latency-svc-28zm2 May 24 00:21:51.010: INFO: Got endpoints: latency-svc-28zm2 [1.193068477s] May 24 00:21:51.050: INFO: Created: latency-svc-jqncl May 24 00:21:51.086: INFO: Got endpoints: latency-svc-jqncl [1.199712276s] May 24 00:21:51.152: INFO: Created: latency-svc-tc5ts May 24 00:21:51.163: INFO: Got endpoints: latency-svc-tc5ts [1.211016208s] May 24 00:21:51.189: INFO: Created: latency-svc-ml7s5 May 24 00:21:51.206: INFO: Got endpoints: latency-svc-ml7s5 [1.175087928s] May 24 00:21:51.231: INFO: Created: latency-svc-2dtbg May 24 00:21:51.242: INFO: Got endpoints: latency-svc-2dtbg [1.097323298s] May 24 00:21:51.298: INFO: Created: latency-svc-l27vm May 24 00:21:51.303: INFO: Got endpoints: latency-svc-l27vm [1.128108943s] May 24 00:21:51.326: INFO: Created: latency-svc-d5st6 May 24 00:21:51.335: INFO: Got endpoints: latency-svc-d5st6 [1.064692743s] May 24 00:21:51.374: INFO: Created: latency-svc-plpmv May 24 00:21:51.414: INFO: Got endpoints: latency-svc-plpmv [1.006324415s] May 24 00:21:51.495: INFO: Created: latency-svc-5p9ng May 24 00:21:51.540: INFO: Got endpoints: latency-svc-5p9ng [1.093641672s] May 24 00:21:51.553: INFO: Created: latency-svc-v7qpd May 24 00:21:51.564: INFO: Got endpoints: latency-svc-v7qpd [976.545542ms] May 24 00:21:51.591: INFO: Created: latency-svc-588nv May 24 00:21:51.613: INFO: Got endpoints: latency-svc-588nv [980.935558ms] May 24 00:21:51.638: INFO: Created: latency-svc-2nlps May 24 00:21:51.696: INFO: Got endpoints: latency-svc-2nlps [984.834783ms] May 24 00:21:51.718: INFO: Created: latency-svc-kcmb9 May 24 00:21:51.734: INFO: Got endpoints: latency-svc-kcmb9 [883.353184ms] May 24 00:21:51.758: INFO: Created: latency-svc-5zhdx May 24 00:21:51.776: INFO: Got endpoints: latency-svc-5zhdx [889.99173ms] May 24 00:21:51.833: INFO: Created: latency-svc-mqcgq May 24 00:21:51.836: INFO: Got endpoints: latency-svc-mqcgq [895.170838ms] May 24 00:21:51.889: INFO: Created: latency-svc-59vgr May 24 00:21:51.909: INFO: Got endpoints: latency-svc-59vgr [898.56304ms] May 24 00:21:51.959: INFO: Created: latency-svc-tswf5 May 24 00:21:51.963: INFO: Got endpoints: latency-svc-tswf5 [877.165965ms] May 24 00:21:51.986: INFO: Created: latency-svc-zzcsn May 24 00:21:52.006: INFO: Got endpoints: latency-svc-zzcsn [842.392505ms] May 24 00:21:52.040: INFO: Created: latency-svc-44grc May 24 00:21:52.162: INFO: Got endpoints: latency-svc-44grc [956.511122ms] May 24 00:21:52.165: INFO: Created: latency-svc-fpwsk May 24 00:21:52.174: INFO: Got endpoints: latency-svc-fpwsk [931.894559ms] May 24 00:21:52.196: INFO: Created: latency-svc-bh9k2 May 24 00:21:52.205: INFO: Got endpoints: latency-svc-bh9k2 [902.481531ms] May 24 00:21:52.245: INFO: Created: latency-svc-z7nfn May 24 00:21:52.260: INFO: 
Got endpoints: latency-svc-z7nfn [924.535936ms] May 24 00:21:52.311: INFO: Created: latency-svc-jchmj May 24 00:21:52.332: INFO: Got endpoints: latency-svc-jchmj [917.805925ms] May 24 00:21:52.369: INFO: Created: latency-svc-r56vz May 24 00:21:52.386: INFO: Got endpoints: latency-svc-r56vz [846.355076ms] May 24 00:21:52.480: INFO: Created: latency-svc-qd25p May 24 00:21:52.500: INFO: Got endpoints: latency-svc-qd25p [936.066904ms] May 24 00:21:52.570: INFO: Created: latency-svc-mcs5b May 24 00:21:52.613: INFO: Created: latency-svc-gt22t May 24 00:21:52.615: INFO: Got endpoints: latency-svc-mcs5b [1.001807663s] May 24 00:21:52.641: INFO: Got endpoints: latency-svc-gt22t [944.734308ms] May 24 00:21:52.671: INFO: Created: latency-svc-kzvr6 May 24 00:21:52.688: INFO: Got endpoints: latency-svc-kzvr6 [953.360604ms] May 24 00:21:52.709: INFO: Created: latency-svc-cqckn May 24 00:21:52.738: INFO: Got endpoints: latency-svc-cqckn [961.92736ms] May 24 00:21:52.753: INFO: Created: latency-svc-7hfvw May 24 00:21:52.790: INFO: Got endpoints: latency-svc-7hfvw [953.719401ms] May 24 00:21:52.827: INFO: Created: latency-svc-67pkg May 24 00:21:52.858: INFO: Got endpoints: latency-svc-67pkg [948.60728ms] May 24 00:21:52.881: INFO: Created: latency-svc-8chk5 May 24 00:21:52.899: INFO: Got endpoints: latency-svc-8chk5 [936.091251ms] May 24 00:21:52.939: INFO: Created: latency-svc-jst2t May 24 00:21:53.019: INFO: Got endpoints: latency-svc-jst2t [1.013027959s] May 24 00:21:53.037: INFO: Created: latency-svc-5lbqb May 24 00:21:53.048: INFO: Got endpoints: latency-svc-5lbqb [885.545213ms] May 24 00:21:53.095: INFO: Created: latency-svc-rfgwp May 24 00:21:53.139: INFO: Got endpoints: latency-svc-rfgwp [964.2391ms] May 24 00:21:53.155: INFO: Created: latency-svc-rwsnf May 24 00:21:53.191: INFO: Got endpoints: latency-svc-rwsnf [986.307559ms] May 24 00:21:53.228: INFO: Created: latency-svc-5s5wb May 24 00:21:53.265: INFO: Got endpoints: latency-svc-5s5wb [1.00475378s] May 24 00:21:53.290: INFO: Created: latency-svc-k5mbw May 24 00:21:53.307: INFO: Got endpoints: latency-svc-k5mbw [975.625361ms] May 24 00:21:53.329: INFO: Created: latency-svc-xx8ml May 24 00:21:53.396: INFO: Got endpoints: latency-svc-xx8ml [1.00957421s] May 24 00:21:53.421: INFO: Created: latency-svc-jp86p May 24 00:21:53.443: INFO: Got endpoints: latency-svc-jp86p [942.376903ms] May 24 00:21:53.462: INFO: Created: latency-svc-shsrd May 24 00:21:53.478: INFO: Got endpoints: latency-svc-shsrd [862.212304ms] May 24 00:21:53.551: INFO: Created: latency-svc-fww8r May 24 00:21:53.570: INFO: Got endpoints: latency-svc-fww8r [929.576191ms] May 24 00:21:53.618: INFO: Created: latency-svc-rn655 May 24 00:21:53.633: INFO: Got endpoints: latency-svc-rn655 [945.703607ms] May 24 00:21:53.719: INFO: Created: latency-svc-j4v7x May 24 00:21:53.729: INFO: Got endpoints: latency-svc-j4v7x [991.062556ms] May 24 00:21:53.756: INFO: Created: latency-svc-t57fj May 24 00:21:53.772: INFO: Got endpoints: latency-svc-t57fj [982.171725ms] May 24 00:21:53.793: INFO: Created: latency-svc-w8qqn May 24 00:21:53.808: INFO: Got endpoints: latency-svc-w8qqn [950.4757ms] May 24 00:21:53.863: INFO: Created: latency-svc-rxm4t May 24 00:21:53.882: INFO: Got endpoints: latency-svc-rxm4t [982.478456ms] May 24 00:21:53.931: INFO: Created: latency-svc-2ldxn May 24 00:21:53.948: INFO: Got endpoints: latency-svc-2ldxn [928.825367ms] May 24 00:21:54.007: INFO: Created: latency-svc-x4m9g May 24 00:21:54.012: INFO: Got endpoints: latency-svc-x4m9g [964.094123ms] May 24 00:21:54.062: INFO: 
Created: latency-svc-txk48 May 24 00:21:54.074: INFO: Got endpoints: latency-svc-txk48 [935.781524ms] May 24 00:21:54.097: INFO: Created: latency-svc-nq7ll May 24 00:21:54.139: INFO: Got endpoints: latency-svc-nq7ll [947.282915ms] May 24 00:21:54.178: INFO: Created: latency-svc-64mqk May 24 00:21:54.214: INFO: Got endpoints: latency-svc-64mqk [949.105874ms] May 24 00:21:54.318: INFO: Created: latency-svc-l8v7l May 24 00:21:54.386: INFO: Created: latency-svc-976vm May 24 00:21:54.390: INFO: Got endpoints: latency-svc-l8v7l [1.082443848s] May 24 00:21:54.399: INFO: Got endpoints: latency-svc-976vm [1.003284047s] May 24 00:21:54.498: INFO: Created: latency-svc-42mgv May 24 00:21:54.555: INFO: Created: latency-svc-84748 May 24 00:21:54.555: INFO: Got endpoints: latency-svc-42mgv [1.112004843s] May 24 00:21:54.590: INFO: Got endpoints: latency-svc-84748 [1.111905726s] May 24 00:21:54.641: INFO: Created: latency-svc-zhczr May 24 00:21:54.660: INFO: Got endpoints: latency-svc-zhczr [1.090048938s] May 24 00:21:54.704: INFO: Created: latency-svc-6hv6g May 24 00:21:54.733: INFO: Got endpoints: latency-svc-6hv6g [1.099529643s] May 24 00:21:54.785: INFO: Created: latency-svc-d8s6b May 24 00:21:54.818: INFO: Got endpoints: latency-svc-d8s6b [1.088919624s] May 24 00:21:54.850: INFO: Created: latency-svc-q7gk7 May 24 00:21:54.879: INFO: Got endpoints: latency-svc-q7gk7 [1.106991497s] May 24 00:21:54.953: INFO: Created: latency-svc-dv8jb May 24 00:21:54.974: INFO: Got endpoints: latency-svc-dv8jb [1.165556012s] May 24 00:21:55.030: INFO: Created: latency-svc-j8m9q May 24 00:21:55.145: INFO: Got endpoints: latency-svc-j8m9q [1.263345962s] May 24 00:21:55.155: INFO: Created: latency-svc-v6lh2 May 24 00:21:55.186: INFO: Got endpoints: latency-svc-v6lh2 [1.238633607s] May 24 00:21:55.220: INFO: Created: latency-svc-7v99d May 24 00:21:55.239: INFO: Got endpoints: latency-svc-7v99d [1.226871763s] May 24 00:21:55.294: INFO: Created: latency-svc-qc5hw May 24 00:21:55.300: INFO: Got endpoints: latency-svc-qc5hw [1.225296501s] May 24 00:21:55.348: INFO: Created: latency-svc-dgqmc May 24 00:21:55.450: INFO: Got endpoints: latency-svc-dgqmc [1.311089857s] May 24 00:21:55.467: INFO: Created: latency-svc-nz46m May 24 00:21:55.486: INFO: Got endpoints: latency-svc-nz46m [1.272694071s] May 24 00:21:55.510: INFO: Created: latency-svc-5w7kz May 24 00:21:55.529: INFO: Got endpoints: latency-svc-5w7kz [1.138679208s] May 24 00:21:55.599: INFO: Created: latency-svc-jshq7 May 24 00:21:55.665: INFO: Got endpoints: latency-svc-jshq7 [1.265948725s] May 24 00:21:55.738: INFO: Created: latency-svc-fr5qt May 24 00:21:55.740: INFO: Got endpoints: latency-svc-fr5qt [1.185154417s] May 24 00:21:55.893: INFO: Created: latency-svc-7th56 May 24 00:21:55.920: INFO: Got endpoints: latency-svc-7th56 [1.329968505s] May 24 00:21:55.942: INFO: Created: latency-svc-4bmnz May 24 00:21:56.037: INFO: Got endpoints: latency-svc-4bmnz [1.376992346s] May 24 00:21:56.050: INFO: Created: latency-svc-ffjt5 May 24 00:21:56.109: INFO: Got endpoints: latency-svc-ffjt5 [1.375864119s] May 24 00:21:56.230: INFO: Created: latency-svc-jlsm7 May 24 00:21:56.250: INFO: Got endpoints: latency-svc-jlsm7 [1.43201311s] May 24 00:21:56.360: INFO: Created: latency-svc-wdftr May 24 00:21:56.364: INFO: Got endpoints: latency-svc-wdftr [1.484489919s] May 24 00:21:56.428: INFO: Created: latency-svc-thjq7 May 24 00:21:56.443: INFO: Got endpoints: latency-svc-thjq7 [1.468953462s] May 24 00:21:56.498: INFO: Created: latency-svc-hf2pt May 24 00:21:56.529: INFO: Got endpoints: 
latency-svc-hf2pt [1.383519431s] May 24 00:21:56.559: INFO: Created: latency-svc-lxrsd May 24 00:21:56.570: INFO: Got endpoints: latency-svc-lxrsd [1.383822921s] May 24 00:21:56.596: INFO: Created: latency-svc-lbpzn May 24 00:21:56.635: INFO: Got endpoints: latency-svc-lbpzn [1.395925059s] May 24 00:21:56.644: INFO: Created: latency-svc-mpw6v May 24 00:21:56.658: INFO: Got endpoints: latency-svc-mpw6v [1.358288782s] May 24 00:21:56.685: INFO: Created: latency-svc-vm7p2 May 24 00:21:56.707: INFO: Got endpoints: latency-svc-vm7p2 [1.256874202s] May 24 00:21:56.734: INFO: Created: latency-svc-tl78z May 24 00:21:56.809: INFO: Got endpoints: latency-svc-tl78z [1.322828556s] May 24 00:21:56.813: INFO: Created: latency-svc-fwdw9 May 24 00:21:56.828: INFO: Got endpoints: latency-svc-fwdw9 [1.298921215s] May 24 00:21:56.855: INFO: Created: latency-svc-q6ljc May 24 00:21:56.864: INFO: Got endpoints: latency-svc-q6ljc [1.198432548s] May 24 00:21:56.895: INFO: Created: latency-svc-zwbzx May 24 00:21:56.935: INFO: Got endpoints: latency-svc-zwbzx [1.194453066s] May 24 00:21:56.948: INFO: Created: latency-svc-zz8ph May 24 00:21:56.986: INFO: Got endpoints: latency-svc-zz8ph [1.065985823s] May 24 00:21:57.079: INFO: Created: latency-svc-x5sgn May 24 00:21:57.095: INFO: Got endpoints: latency-svc-x5sgn [1.057403483s] May 24 00:21:57.133: INFO: Created: latency-svc-fg6ws May 24 00:21:57.264: INFO: Got endpoints: latency-svc-fg6ws [1.155243644s] May 24 00:21:57.286: INFO: Created: latency-svc-hxbj6 May 24 00:21:57.303: INFO: Got endpoints: latency-svc-hxbj6 [1.053176614s] May 24 00:21:57.332: INFO: Created: latency-svc-bh9c9 May 24 00:21:57.352: INFO: Got endpoints: latency-svc-bh9c9 [988.284201ms] May 24 00:21:57.420: INFO: Created: latency-svc-k577x May 24 00:21:57.424: INFO: Got endpoints: latency-svc-k577x [981.344168ms] May 24 00:21:57.612: INFO: Created: latency-svc-wnqgj May 24 00:21:57.618: INFO: Got endpoints: latency-svc-wnqgj [1.089007172s] May 24 00:21:57.766: INFO: Created: latency-svc-95gnk May 24 00:21:57.773: INFO: Got endpoints: latency-svc-95gnk [1.202269723s] May 24 00:21:57.800: INFO: Created: latency-svc-kblcq May 24 00:21:57.809: INFO: Got endpoints: latency-svc-kblcq [1.174079751s] May 24 00:21:57.843: INFO: Created: latency-svc-sgjjs May 24 00:21:57.881: INFO: Got endpoints: latency-svc-sgjjs [1.222851986s] May 24 00:21:57.911: INFO: Created: latency-svc-s64bn May 24 00:21:57.926: INFO: Got endpoints: latency-svc-s64bn [1.21883991s] May 24 00:21:57.957: INFO: Created: latency-svc-6zt54 May 24 00:21:57.966: INFO: Got endpoints: latency-svc-6zt54 [1.156838178s] May 24 00:21:58.030: INFO: Created: latency-svc-rbswq May 24 00:21:58.034: INFO: Got endpoints: latency-svc-rbswq [1.206347946s] May 24 00:21:58.064: INFO: Created: latency-svc-xw4ll May 24 00:21:58.075: INFO: Got endpoints: latency-svc-xw4ll [1.211842883s] May 24 00:21:58.102: INFO: Created: latency-svc-b9hs4 May 24 00:21:58.112: INFO: Got endpoints: latency-svc-b9hs4 [1.177635205s] May 24 00:21:58.163: INFO: Created: latency-svc-ndv7m May 24 00:21:58.167: INFO: Got endpoints: latency-svc-ndv7m [1.181158532s] May 24 00:21:58.215: INFO: Created: latency-svc-qgwv2 May 24 00:21:58.302: INFO: Created: latency-svc-6pn7q May 24 00:21:58.302: INFO: Got endpoints: latency-svc-qgwv2 [1.207025317s] May 24 00:21:58.325: INFO: Got endpoints: latency-svc-6pn7q [1.060550851s] May 24 00:21:58.350: INFO: Created: latency-svc-4qqnr May 24 00:21:58.355: INFO: Got endpoints: latency-svc-4qqnr [1.051239078s] May 24 00:21:58.385: INFO: Created: 
latency-svc-5pdv9 May 24 00:21:58.432: INFO: Got endpoints: latency-svc-5pdv9 [1.079755253s] May 24 00:21:58.449: INFO: Created: latency-svc-b2gpg May 24 00:21:58.482: INFO: Got endpoints: latency-svc-b2gpg [1.057422753s] May 24 00:21:58.515: INFO: Created: latency-svc-7qxc9 May 24 00:21:58.525: INFO: Got endpoints: latency-svc-7qxc9 [907.139523ms] May 24 00:21:58.576: INFO: Created: latency-svc-rlddp May 24 00:21:58.600: INFO: Created: latency-svc-xhv82 May 24 00:21:58.600: INFO: Got endpoints: latency-svc-rlddp [827.826758ms] May 24 00:21:58.628: INFO: Got endpoints: latency-svc-xhv82 [818.900841ms] May 24 00:21:58.659: INFO: Created: latency-svc-ktdkn May 24 00:21:58.673: INFO: Got endpoints: latency-svc-ktdkn [792.179822ms] May 24 00:21:58.738: INFO: Created: latency-svc-rkjqd May 24 00:21:58.744: INFO: Got endpoints: latency-svc-rkjqd [818.481721ms] May 24 00:21:58.768: INFO: Created: latency-svc-fzplm May 24 00:21:58.821: INFO: Got endpoints: latency-svc-fzplm [854.666326ms] May 24 00:21:58.882: INFO: Created: latency-svc-qmk8f May 24 00:21:58.884: INFO: Got endpoints: latency-svc-qmk8f [850.189763ms] May 24 00:21:59.026: INFO: Created: latency-svc-m489k May 24 00:21:59.029: INFO: Got endpoints: latency-svc-m489k [953.637306ms] May 24 00:21:59.084: INFO: Created: latency-svc-ctfnt May 24 00:21:59.098: INFO: Got endpoints: latency-svc-ctfnt [985.49611ms] May 24 00:21:59.121: INFO: Created: latency-svc-wpbw6 May 24 00:21:59.175: INFO: Got endpoints: latency-svc-wpbw6 [1.00761168s] May 24 00:21:59.194: INFO: Created: latency-svc-xhgf6 May 24 00:21:59.212: INFO: Got endpoints: latency-svc-xhgf6 [910.438207ms] May 24 00:21:59.236: INFO: Created: latency-svc-f7kkf May 24 00:21:59.250: INFO: Got endpoints: latency-svc-f7kkf [925.207202ms] May 24 00:21:59.313: INFO: Created: latency-svc-8t4jx May 24 00:21:59.326: INFO: Got endpoints: latency-svc-8t4jx [971.556208ms] May 24 00:21:59.362: INFO: Created: latency-svc-jv2c2 May 24 00:21:59.382: INFO: Got endpoints: latency-svc-jv2c2 [949.797928ms] May 24 00:21:59.450: INFO: Created: latency-svc-bzpzx May 24 00:21:59.455: INFO: Got endpoints: latency-svc-bzpzx [972.858643ms] May 24 00:21:59.481: INFO: Created: latency-svc-td66q May 24 00:21:59.499: INFO: Got endpoints: latency-svc-td66q [973.758304ms] May 24 00:21:59.535: INFO: Created: latency-svc-jd79h May 24 00:21:59.582: INFO: Got endpoints: latency-svc-jd79h [981.226422ms] May 24 00:21:59.589: INFO: Created: latency-svc-x9s66 May 24 00:21:59.605: INFO: Got endpoints: latency-svc-x9s66 [976.827898ms] May 24 00:21:59.632: INFO: Created: latency-svc-q4w29 May 24 00:21:59.661: INFO: Got endpoints: latency-svc-q4w29 [988.2281ms] May 24 00:21:59.719: INFO: Created: latency-svc-2mkts May 24 00:21:59.724: INFO: Got endpoints: latency-svc-2mkts [979.478416ms] May 24 00:21:59.751: INFO: Created: latency-svc-6h78h May 24 00:21:59.767: INFO: Got endpoints: latency-svc-6h78h [946.107548ms] May 24 00:21:59.767: INFO: Latencies: [66.742744ms 200.298554ms 258.543473ms 299.282727ms 409.187857ms 439.289532ms 510.195328ms 558.505072ms 649.77314ms 711.466543ms 792.179822ms 818.481721ms 818.900841ms 824.140913ms 827.826758ms 842.392505ms 846.355076ms 850.189763ms 854.666326ms 862.212304ms 864.67405ms 877.165965ms 883.353184ms 885.545213ms 889.99173ms 895.170838ms 898.56304ms 902.481531ms 907.139523ms 910.438207ms 917.805925ms 924.535936ms 925.207202ms 928.825367ms 929.576191ms 931.894559ms 935.781524ms 936.066904ms 936.091251ms 942.376903ms 944.734308ms 945.703607ms 946.107548ms 947.282915ms 948.60728ms 
949.105874ms 949.797928ms 950.4757ms 953.360604ms 953.637306ms 953.719401ms 956.511122ms 961.92736ms 964.094123ms 964.2391ms 971.556208ms 972.858643ms 973.758304ms 975.625361ms 976.545542ms 976.827898ms 979.478416ms 980.935558ms 981.226422ms 981.344168ms 982.021889ms 982.171725ms 982.478456ms 984.834783ms 985.49611ms 986.307559ms 988.2281ms 988.284201ms 991.062556ms 1.001807663s 1.003284047s 1.00475378s 1.006324415s 1.00761168s 1.00957421s 1.013027959s 1.051239078s 1.053176614s 1.057403483s 1.057422753s 1.060550851s 1.064692743s 1.065985823s 1.068866314s 1.079755253s 1.082443848s 1.088919624s 1.089007172s 1.090048938s 1.092803419s 1.093641672s 1.097323298s 1.099529643s 1.106991497s 1.107967675s 1.111905726s 1.112004843s 1.115264203s 1.11733923s 1.117824005s 1.119821957s 1.12282485s 1.128108943s 1.130654127s 1.131097118s 1.13217642s 1.132218859s 1.13434802s 1.138679208s 1.13917158s 1.144532866s 1.144694083s 1.144994208s 1.147984029s 1.150654524s 1.152079s 1.155243644s 1.155415796s 1.156838178s 1.157089848s 1.158091468s 1.158423935s 1.160039163s 1.162471315s 1.162490376s 1.163297315s 1.163708595s 1.165556012s 1.165995524s 1.16910804s 1.174079751s 1.175087928s 1.175179784s 1.177635205s 1.180958524s 1.181158532s 1.182459876s 1.183567373s 1.185154417s 1.192784732s 1.193068477s 1.193446366s 1.194289318s 1.194453066s 1.198432548s 1.199712276s 1.200447275s 1.202269723s 1.202336128s 1.204252325s 1.204946516s 1.206347946s 1.207025317s 1.211016208s 1.211842883s 1.212744552s 1.217935381s 1.21883991s 1.219767363s 1.220982486s 1.221339586s 1.222851986s 1.225296501s 1.226871763s 1.227221937s 1.228804122s 1.22893345s 1.233055995s 1.234574533s 1.23713972s 1.238633607s 1.247211372s 1.253873162s 1.254571122s 1.255417648s 1.256268602s 1.256874202s 1.263345962s 1.265948725s 1.272694071s 1.283932089s 1.284559474s 1.298921215s 1.311089857s 1.322828556s 1.329968505s 1.358288782s 1.375864119s 1.376992346s 1.383519431s 1.383822921s 1.395925059s 1.43201311s 1.468953462s 1.484489919s] May 24 00:21:59.767: INFO: 50 %ile: 1.111905726s May 24 00:21:59.767: INFO: 90 %ile: 1.256268602s May 24 00:21:59.767: INFO: 99 %ile: 1.468953462s May 24 00:21:59.767: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:21:59.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-2296" for this suite. 
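------------------------------
For reference: each "Created/Got endpoints" pair above times how long it takes from creating a Service until the endpoints controller publishes a ready Endpoints object for it, and the 50/90/99 %ile lines are computed over the 200 samples listed. A minimal standalone sketch of one such sample, assuming a reachable cluster at /root/.kube/config and an existing namespace whose pods match the selector; all names here are illustrative, not the e2e framework's own helpers, and the real spec watches an informer rather than polling.

// svc_latency_sketch.go: one latency sample = Service creation -> first address.
package main

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)
    ns := "svc-latency-demo" // illustrative; the suite uses a generated namespace

    svc := &corev1.Service{
        ObjectMeta: metav1.ObjectMeta{GenerateName: "latency-svc-"},
        Spec: corev1.ServiceSpec{
            Selector: map[string]string{"name": "svc-latency-rc"}, // must match the RC's pods
            Ports:    []corev1.ServicePort{{Port: 80, TargetPort: intstr.FromInt(80)}},
        },
    }
    start := time.Now()
    created, err := cs.CoreV1().Services(ns).Create(context.TODO(), svc, metav1.CreateOptions{})
    if err != nil {
        panic(err)
    }
    // Poll until the endpoints controller has published at least one address.
    err = wait.PollImmediate(10*time.Millisecond, time.Minute, func() (bool, error) {
        ep, getErr := cs.CoreV1().Endpoints(ns).Get(context.TODO(), created.Name, metav1.GetOptions{})
        if getErr != nil {
            return false, nil // Endpoints object not created yet
        }
        for _, ss := range ep.Subsets {
            if len(ss.Addresses) > 0 {
                return true, nil
            }
        }
        return false, nil
    })
    if err != nil {
        panic(err)
    }
    fmt.Printf("Got endpoints: %s [%v]\n", created.Name, time.Since(start))
}
------------------------------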
• [SLOW TEST:18.811 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":288,"completed":168,"skipped":2783,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:21:59.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 00:22:00.632: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 24 00:22:02.839: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876520, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876520, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876520, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876520, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 00:22:04.893: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876520, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876520, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876520, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876520, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: 
Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 00:22:07.881: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 24 00:22:07.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-703-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:22:09.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4271" for this suite. STEP: Destroying namespace "webhook-4271-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.633 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":288,"completed":169,"skipped":2799,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:22:09.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2087.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-2087.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2087.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2087.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-2087.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2087.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 24 00:22:17.923: INFO: DNS probes using dns-2087/dns-test-ad600955-e795-4c70-84ab-47bb29d6f17b succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:22:18.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2087" for this suite. • [SLOW TEST:8.945 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":288,"completed":170,"skipped":2811,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:22:18.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-079f0df2-db8b-485a-9f5d-6d5276f26ebb STEP: Creating a pod to test consume configMaps May 24 00:22:18.839: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cb810546-c6f9-46c8-8530-d7d6d1f9d84b" in namespace "projected-6645" to be "Succeeded or Failed" May 24 00:22:18.901: INFO: Pod "pod-projected-configmaps-cb810546-c6f9-46c8-8530-d7d6d1f9d84b": Phase="Pending", Reason="", readiness=false. Elapsed: 61.798247ms May 24 00:22:21.000: INFO: Pod "pod-projected-configmaps-cb810546-c6f9-46c8-8530-d7d6d1f9d84b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161109466s May 24 00:22:23.085: INFO: Pod "pod-projected-configmaps-cb810546-c6f9-46c8-8530-d7d6d1f9d84b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.245777857s May 24 00:22:25.126: INFO: Pod "pod-projected-configmaps-cb810546-c6f9-46c8-8530-d7d6d1f9d84b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.287083173s STEP: Saw pod success May 24 00:22:25.126: INFO: Pod "pod-projected-configmaps-cb810546-c6f9-46c8-8530-d7d6d1f9d84b" satisfied condition "Succeeded or Failed" May 24 00:22:25.129: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-cb810546-c6f9-46c8-8530-d7d6d1f9d84b container projected-configmap-volume-test: STEP: delete the pod May 24 00:22:25.276: INFO: Waiting for pod pod-projected-configmaps-cb810546-c6f9-46c8-8530-d7d6d1f9d84b to disappear May 24 00:22:25.280: INFO: Pod pod-projected-configmaps-cb810546-c6f9-46c8-8530-d7d6d1f9d84b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:22:25.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6645" for this suite. • [SLOW TEST:6.987 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":171,"skipped":2816,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:22:25.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 24 00:22:25.517: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the sample API server. 
May 24 00:22:26.194: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 24 00:22:29.094: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876546, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876546, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876546, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876546, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 00:22:31.331: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876546, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876546, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876546, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876546, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76d68c4777\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 00:22:33.797: INFO: Waited 655.347503ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:22:34.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-7532" for this suite. 
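------------------------------
The aggregator spec above registers the 1.17 sample API server by deploying it, fronting it with a Service, and creating an APIService object that tells kube-apiserver to proxy one group/version to that Service. A rough sketch of the APIService shape under assumed names (the wardle group is what the upstream sample-apiserver serves; the namespace, Service name, and CA bundle here are illustrative and the bundle must verify the sample server's certificate):

// apiservice_sketch.go: what makes kube-apiserver proxy
// /apis/wardle.example.com/v1alpha1 to the sample server's Service.
package main

import (
    "encoding/json"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    apiregistrationv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
)

func sampleAPIService(caBundle []byte) *apiregistrationv1.APIService {
    return &apiregistrationv1.APIService{
        // The name must be "<version>.<group>".
        ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.example.com"},
        Spec: apiregistrationv1.APIServiceSpec{
            Group:   "wardle.example.com",
            Version: "v1alpha1",
            Service: &apiregistrationv1.ServiceReference{
                Namespace: "aggregator-demo", // illustrative
                Name:      "sample-api",      // illustrative
            },
            CABundle:             caBundle, // must sign the sample server's serving cert
            GroupPriorityMinimum: 2000,
            VersionPriority:      200,
        },
    }
}

func main() {
    b, _ := json.MarshalIndent(sampleAPIService([]byte("<ca.crt bytes here>")), "", "  ")
    fmt.Println(string(b))
}
------------------------------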
• [SLOW TEST:9.460 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":288,"completed":172,"skipped":2854,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:22:34.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 24 00:22:39.731: INFO: Successfully updated pod "labelsupdatedd0fd649-600c-4bc5-ad3d-5865278c702b" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:22:41.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1754" for this suite. 
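------------------------------
The labels-update spec above works because a downwardAPI volume file is re-rendered by the kubelet when pod metadata changes, unlike environment variables, which are fixed at container start; "Successfully updated pod" marks the label patch, after which the test waits for the file content to change. A hedged sketch of a pod of that shape (the name, image, and command are illustrative, not the test's exact fixture):

// downward_labels_sketch.go
package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func labelsUpdatePod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            Name:   "labelsupdate-demo", // illustrative
            Labels: map[string]string{"key": "value1"},
        },
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "client-container",
                Image: "busybox", // any image with a shell works
                // Keep printing the rendered labels file; after the pod's
                // labels are patched, the file content changes in place.
                Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path:     "labels",
                            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
                        }},
                    },
                },
            }},
        },
    }
}

func main() {
    b, _ := json.MarshalIndent(labelsUpdatePod(), "", "  ")
    fmt.Println(string(b))
}
------------------------------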
• [SLOW TEST:7.095 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":288,"completed":173,"skipped":2881,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:22:41.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-349/configmap-test-c8882f59-f6aa-4b79-bfd2-75b5d0b2026d STEP: Creating a pod to test consume configMaps May 24 00:22:42.064: INFO: Waiting up to 5m0s for pod "pod-configmaps-c6f381d2-940a-445d-bc89-f94300754929" in namespace "configmap-349" to be "Succeeded or Failed" May 24 00:22:42.066: INFO: Pod "pod-configmaps-c6f381d2-940a-445d-bc89-f94300754929": Phase="Pending", Reason="", readiness=false. Elapsed: 2.799551ms May 24 00:22:44.169: INFO: Pod "pod-configmaps-c6f381d2-940a-445d-bc89-f94300754929": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105347879s May 24 00:22:46.174: INFO: Pod "pod-configmaps-c6f381d2-940a-445d-bc89-f94300754929": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.11010005s STEP: Saw pod success May 24 00:22:46.174: INFO: Pod "pod-configmaps-c6f381d2-940a-445d-bc89-f94300754929" satisfied condition "Succeeded or Failed" May 24 00:22:46.178: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-c6f381d2-940a-445d-bc89-f94300754929 container env-test: STEP: delete the pod May 24 00:22:46.243: INFO: Waiting for pod pod-configmaps-c6f381d2-940a-445d-bc89-f94300754929 to disappear May 24 00:22:46.258: INFO: Pod pod-configmaps-c6f381d2-940a-445d-bc89-f94300754929 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:22:46.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-349" for this suite. 
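------------------------------
The ConfigMap spec above injects a ConfigMap key as an environment variable via valueFrom.configMapKeyRef, runs a container that echoes its environment, and checks the log once the pod reaches Succeeded. An illustrative pod of the same shape (the ConfigMap name, key, env var name, and image are assumptions, not the test's exact fixtures):

// configmap_env_sketch.go
package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func configMapEnvPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"}, // illustrative
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever, // run once, then Succeed
            Containers: []corev1.Container{{
                Name:    "env-test",
                Image:   "busybox",
                Command: []string{"sh", "-c", "env | grep CONFIG_DATA_1"},
                Env: []corev1.EnvVar{{
                    Name: "CONFIG_DATA_1",
                    ValueFrom: &corev1.EnvVarSource{
                        ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"}, // assumed name
                            Key:                  "data-1",                                            // assumed key
                        },
                    },
                }},
            }},
        },
    }
}

func main() {
    b, _ := json.MarshalIndent(configMapEnvPod(), "", "  ")
    fmt.Println(string(b))
}
------------------------------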
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":288,"completed":174,"skipped":2894,"failed":0} SSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:22:46.266: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 24 00:22:46.381: INFO: Waiting up to 5m0s for pod "busybox-user-65534-b0ffae1e-cd85-40de-b99c-dd15bcb7d2b0" in namespace "security-context-test-636" to be "Succeeded or Failed" May 24 00:22:46.399: INFO: Pod "busybox-user-65534-b0ffae1e-cd85-40de-b99c-dd15bcb7d2b0": Phase="Pending", Reason="", readiness=false. Elapsed: 17.258396ms May 24 00:22:48.660: INFO: Pod "busybox-user-65534-b0ffae1e-cd85-40de-b99c-dd15bcb7d2b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.278973171s May 24 00:22:50.664: INFO: Pod "busybox-user-65534-b0ffae1e-cd85-40de-b99c-dd15bcb7d2b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.282793007s May 24 00:22:50.664: INFO: Pod "busybox-user-65534-b0ffae1e-cd85-40de-b99c-dd15bcb7d2b0" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:22:50.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-636" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":175,"skipped":2897,"failed":0} SSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:22:50.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-c493c28b-95bf-43e8-a5a4-7d18fa8cd1ae STEP: Creating the pod STEP: Updating configmap configmap-test-upd-c493c28b-95bf-43e8-a5a4-7d18fa8cd1ae STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:24:21.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1952" for this suite. • [SLOW TEST:90.849 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":176,"skipped":2902,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:24:21.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 24 00:24:21.635: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e10fc25e-6e64-4c61-9d58-f0d529853c95" in namespace "downward-api-5927" to be "Succeeded or Failed" May 24 00:24:21.657: INFO: Pod "downwardapi-volume-e10fc25e-6e64-4c61-9d58-f0d529853c95": Phase="Pending", Reason="", readiness=false. Elapsed: 22.150779ms May 24 00:24:23.805: INFO: Pod "downwardapi-volume-e10fc25e-6e64-4c61-9d58-f0d529853c95": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.169570074s May 24 00:24:25.810: INFO: Pod "downwardapi-volume-e10fc25e-6e64-4c61-9d58-f0d529853c95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.174859346s STEP: Saw pod success May 24 00:24:25.810: INFO: Pod "downwardapi-volume-e10fc25e-6e64-4c61-9d58-f0d529853c95" satisfied condition "Succeeded or Failed" May 24 00:24:25.814: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-e10fc25e-6e64-4c61-9d58-f0d529853c95 container client-container: STEP: delete the pod May 24 00:24:25.890: INFO: Waiting for pod downwardapi-volume-e10fc25e-6e64-4c61-9d58-f0d529853c95 to disappear May 24 00:24:25.899: INFO: Pod downwardapi-volume-e10fc25e-6e64-4c61-9d58-f0d529853c95 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:24:25.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5927" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":288,"completed":177,"skipped":2920,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:24:25.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 24 00:24:26.029: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7b5a067e-449b-425a-94c7-21a2436f46fe" in namespace "downward-api-2760" to be "Succeeded or Failed" May 24 00:24:26.036: INFO: Pod "downwardapi-volume-7b5a067e-449b-425a-94c7-21a2436f46fe": Phase="Pending", Reason="", readiness=false. Elapsed: 7.658179ms May 24 00:24:28.041: INFO: Pod "downwardapi-volume-7b5a067e-449b-425a-94c7-21a2436f46fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012426254s May 24 00:24:30.046: INFO: Pod "downwardapi-volume-7b5a067e-449b-425a-94c7-21a2436f46fe": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017345548s STEP: Saw pod success May 24 00:24:30.046: INFO: Pod "downwardapi-volume-7b5a067e-449b-425a-94c7-21a2436f46fe" satisfied condition "Succeeded or Failed" May 24 00:24:30.049: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-7b5a067e-449b-425a-94c7-21a2436f46fe container client-container: STEP: delete the pod May 24 00:24:30.106: INFO: Waiting for pod downwardapi-volume-7b5a067e-449b-425a-94c7-21a2436f46fe to disappear May 24 00:24:30.141: INFO: Pod downwardapi-volume-7b5a067e-449b-425a-94c7-21a2436f46fe no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:24:30.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2760" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":288,"completed":178,"skipped":2934,"failed":0} ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:24:30.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-48af8b61-d02d-4b74-880b-49d12064be7e STEP: Creating a pod to test consume secrets May 24 00:24:30.520: INFO: Waiting up to 5m0s for pod "pod-secrets-a8f0380f-e54a-4104-b236-2918f9034942" in namespace "secrets-1430" to be "Succeeded or Failed" May 24 00:24:30.608: INFO: Pod "pod-secrets-a8f0380f-e54a-4104-b236-2918f9034942": Phase="Pending", Reason="", readiness=false. Elapsed: 87.964551ms May 24 00:24:32.645: INFO: Pod "pod-secrets-a8f0380f-e54a-4104-b236-2918f9034942": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124394178s May 24 00:24:34.648: INFO: Pod "pod-secrets-a8f0380f-e54a-4104-b236-2918f9034942": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.127077061s STEP: Saw pod success May 24 00:24:34.648: INFO: Pod "pod-secrets-a8f0380f-e54a-4104-b236-2918f9034942" satisfied condition "Succeeded or Failed" May 24 00:24:34.650: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-a8f0380f-e54a-4104-b236-2918f9034942 container secret-volume-test: STEP: delete the pod May 24 00:24:34.692: INFO: Waiting for pod pod-secrets-a8f0380f-e54a-4104-b236-2918f9034942 to disappear May 24 00:24:34.725: INFO: Pod pod-secrets-a8f0380f-e54a-4104-b236-2918f9034942 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:24:34.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1430" for this suite. 
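------------------------------
The Secrets spec above mounts a Secret as files in the container filesystem and has the container read a key back; the test then compares the container log against the Secret's stored value. An illustrative pod of the same shape (the Secret name, key, file mode, and image are assumptions, not the test's exact fixtures):

// secret_volume_sketch.go
package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func secretVolumePod() *corev1.Pod {
    mode := int32(0444) // illustrative file mode for the projected files
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"}, // illustrative
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:  "secret-volume-test",
                Image: "busybox",
                // Print the key's content; a checker then reads this
                // container's log and compares it to the Secret's value.
                Command:      []string{"sh", "-c", "cat /etc/secret-volume/data-1"},
                VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true}},
            }},
            Volumes: []corev1.Volume{{
                Name: "secret-volume",
                VolumeSource: corev1.VolumeSource{
                    Secret: &corev1.SecretVolumeSource{
                        SecretName:  "secret-test", // assumed name
                        DefaultMode: &mode,
                    },
                },
            }},
        },
    }
}

func main() {
    b, _ := json.MarshalIndent(secretVolumePod(), "", "  ")
    fmt.Println(string(b))
}
------------------------------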
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":288,"completed":179,"skipped":2934,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:24:34.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-42bbdee9-82d2-4dab-9aeb-041713d3b319 STEP: Creating secret with name s-test-opt-upd-2ae7d796-6ec9-472c-8835-15405cbd94c5 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-42bbdee9-82d2-4dab-9aeb-041713d3b319 STEP: Updating secret s-test-opt-upd-2ae7d796-6ec9-472c-8835-15405cbd94c5 STEP: Creating secret with name s-test-opt-create-36ae2eaf-a738-4eaf-94e4-b2c2d019668e STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:24:43.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-893" for this suite. • [SLOW TEST:8.287 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":180,"skipped":3014,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:24:43.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:24:59.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2794" for this suite. • [SLOW TEST:16.623 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":288,"completed":181,"skipped":3031,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:24:59.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 24 00:24:59.760: INFO: Creating deployment "webserver-deployment" May 24 00:24:59.792: INFO: Waiting for observed generation 1 May 24 00:25:02.201: INFO: Waiting for all required pods to come up May 24 00:25:02.205: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running May 24 00:25:12.429: INFO: Waiting for deployment "webserver-deployment" to complete May 24 00:25:12.434: INFO: Updating deployment "webserver-deployment" with a non-existent image May 24 00:25:12.442: INFO: Updating deployment webserver-deployment May 24 00:25:12.442: INFO: Waiting for observed generation 2 May 24 00:25:14.461: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 24 00:25:14.464: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 24 00:25:14.466: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of 
replicas May 24 00:25:14.475: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 24 00:25:14.475: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 24 00:25:14.477: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 24 00:25:14.482: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas May 24 00:25:14.482: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 May 24 00:25:14.487: INFO: Updating deployment webserver-deployment May 24 00:25:14.487: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas May 24 00:25:15.078: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 24 00:25:15.142: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 24 00:25:15.465: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-6902 /apis/apps/v1/namespaces/deployment-6902/deployments/webserver-deployment e790e7d6-90c8-424f-8f72-1692b10fe8cc 7159152 3 2020-05-24 00:24:59 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-24 00:25:14 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-24 00:25:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00302fc38 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-05-24 00:25:12 +0000 UTC,LastTransitionTime:2020-05-24 00:24:59 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-24 00:25:15 +0000 UTC,LastTransitionTime:2020-05-24 00:25:15 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} May 24 00:25:15.602: INFO: New ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4 deployment-6902 /apis/apps/v1/namespaces/deployment-6902/replicasets/webserver-deployment-6676bcd6d4 d0610c8e-c2cb-457a-a3ec-ed410c7501fc 7159198 3 2020-05-24 00:25:12 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment e790e7d6-90c8-424f-8f72-1692b10fe8cc 0xc002c1b3e7 0xc002c1b3e8}] [] [{kube-controller-manager Update apps/v1 2020-05-24 00:25:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e790e7d6-90c8-424f-8f72-1692b10fe8cc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002c1b468 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 24 00:25:15.602: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 24 00:25:15.602: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797 deployment-6902 /apis/apps/v1/namespaces/deployment-6902/replicasets/webserver-deployment-84855cf797 b7a9b29e-40cc-4f87-92e7-daae5c68a408 7159196 3 2020-05-24 00:24:59 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment e790e7d6-90c8-424f-8f72-1692b10fe8cc 0xc002c1b4c7 0xc002c1b4c8}] [] [{kube-controller-manager Update apps/v1 2020-05-24 00:25:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e790e7d6-90c8-424f-8f72-1692b10fe8cc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002c1b538 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 24 00:25:15.655: INFO: Pod "webserver-deployment-6676bcd6d4-bjl7r" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-bjl7r webserver-deployment-6676bcd6d4- deployment-6902 
/api/v1/namespaces/deployment-6902/pods/webserver-deployment-6676bcd6d4-bjl7r 27ed1635-6939-48d7-9411-2c161813c4b3 7159118 0 2020-05-24 00:25:12 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d0610c8e-c2cb-457a-a3ec-ed410c7501fc 0xc0036720b7 0xc0036720b8}] [] [{kube-controller-manager Update v1 2020-05-24 00:25:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d0610c8e-c2cb-457a-a3ec-ed410c7501fc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-24 00:25:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8t66x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8t66x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8t66x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-24 00:25:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 00:25:15.656: INFO: Pod "webserver-deployment-6676bcd6d4-dh47k" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-dh47k webserver-deployment-6676bcd6d4- deployment-6902 /api/v1/namespaces/deployment-6902/pods/webserver-deployment-6676bcd6d4-dh47k 88635b10-06b5-4bf2-8b21-9e1f5102747c 7159125 0 2020-05-24 00:25:12 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d0610c8e-c2cb-457a-a3ec-ed410c7501fc 0xc003672277 0xc003672278}] [] [{kube-controller-manager Update v1 2020-05-24 00:25:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d0610c8e-c2cb-457a-a3ec-ed410c7501fc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-24 00:25:12 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8t66x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8t66x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8t66x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:12 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-24 00:25:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 00:25:15.656: INFO: Pod "webserver-deployment-6676bcd6d4-dqfkj" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-dqfkj webserver-deployment-6676bcd6d4- deployment-6902 /api/v1/namespaces/deployment-6902/pods/webserver-deployment-6676bcd6d4-dqfkj 25cecfee-c59c-49af-9d2a-48a98a6d02bf 7159165 0 2020-05-24 00:25:15 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d0610c8e-c2cb-457a-a3ec-ed410c7501fc 0xc003672447 0xc003672448}] [] [{kube-controller-manager Update v1 2020-05-24 00:25:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d0610c8e-c2cb-457a-a3ec-ed410c7501fc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8t66x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8t66x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8t66x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 00:25:15.656: INFO: Pod "webserver-deployment-6676bcd6d4-f4krw" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-f4krw webserver-deployment-6676bcd6d4- deployment-6902 /api/v1/namespaces/deployment-6902/pods/webserver-deployment-6676bcd6d4-f4krw 1259bb12-4d55-4b52-ad30-417dfe4ad695 7159164 0 2020-05-24 00:25:15 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d0610c8e-c2cb-457a-a3ec-ed410c7501fc 0xc003672597 0xc003672598}] [] [{kube-controller-manager Update v1 2020-05-24 00:25:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d0610c8e-c2cb-457a-a3ec-ed410c7501fc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8t66x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8t66x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8t66x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:15 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 00:25:15.656: INFO: Pod "webserver-deployment-6676bcd6d4-h4csm" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-h4csm webserver-deployment-6676bcd6d4- deployment-6902 /api/v1/namespaces/deployment-6902/pods/webserver-deployment-6676bcd6d4-h4csm 0385e3bc-5231-403e-a3c5-268dae801438 7159189 0 2020-05-24 00:25:15 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d0610c8e-c2cb-457a-a3ec-ed410c7501fc 0xc0036726d7 0xc0036726d8}] [] [{kube-controller-manager Update v1 2020-05-24 00:25:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d0610c8e-c2cb-457a-a3ec-ed410c7501fc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8t66x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8t66x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8t66x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainer
s:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 00:25:15.657: INFO: Pod "webserver-deployment-6676bcd6d4-hxbd6" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-hxbd6 webserver-deployment-6676bcd6d4- deployment-6902 /api/v1/namespaces/deployment-6902/pods/webserver-deployment-6676bcd6d4-hxbd6 37965996-7c0f-49f0-947c-e77bf253802b 7159187 0 2020-05-24 00:25:15 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d0610c8e-c2cb-457a-a3ec-ed410c7501fc 0xc003672827 0xc003672828}] [] [{kube-controller-manager Update v1 2020-05-24 00:25:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d0610c8e-c2cb-457a-a3ec-ed410c7501fc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8t66x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8t66x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8t66x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,A
llowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 00:25:15.657: INFO: Pod "webserver-deployment-6676bcd6d4-pp965" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-pp965 webserver-deployment-6676bcd6d4- deployment-6902 /api/v1/namespaces/deployment-6902/pods/webserver-deployment-6676bcd6d4-pp965 a2ca70d2-55db-4b36-ae35-9547d51b568b 7159122 0 2020-05-24 00:25:12 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d0610c8e-c2cb-457a-a3ec-ed410c7501fc 0xc003672977 0xc003672978}] [] [{kube-controller-manager Update v1 2020-05-24 00:25:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d0610c8e-c2cb-457a-a3ec-ed410c7501fc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-24 00:25:12 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8t66x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8t66x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8t66x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:12 +0000 UTC,Reason:ContainersNotReady,Message:containers 
with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-24 00:25:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 00:25:15.658: INFO: Pod "webserver-deployment-6676bcd6d4-qfdfg" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-qfdfg webserver-deployment-6676bcd6d4- deployment-6902 /api/v1/namespaces/deployment-6902/pods/webserver-deployment-6676bcd6d4-qfdfg f524aeaf-3c24-4d8a-abb2-9825b154d166 7159185 0 2020-05-24 00:25:15 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d0610c8e-c2cb-457a-a3ec-ed410c7501fc 0xc003672b37 0xc003672b38}] [] [{kube-controller-manager Update v1 2020-05-24 00:25:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d0610c8e-c2cb-457a-a3ec-ed410c7501fc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8t66x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8t66x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8t66x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 00:25:15.658: INFO: Pod "webserver-deployment-6676bcd6d4-qvxmd" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-qvxmd webserver-deployment-6676bcd6d4- deployment-6902 /api/v1/namespaces/deployment-6902/pods/webserver-deployment-6676bcd6d4-qvxmd 6340111c-efe7-416c-9de9-19b0525cecf3 7159151 0 2020-05-24 00:25:15 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d0610c8e-c2cb-457a-a3ec-ed410c7501fc 0xc003672ca7 0xc003672ca8}] [] [{kube-controller-manager Update v1 2020-05-24 00:25:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d0610c8e-c2cb-457a-a3ec-ed410c7501fc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8t66x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8t66x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8t66x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:15 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 00:25:15.658: INFO: Pod "webserver-deployment-6676bcd6d4-rgz72" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-rgz72 webserver-deployment-6676bcd6d4- deployment-6902 /api/v1/namespaces/deployment-6902/pods/webserver-deployment-6676bcd6d4-rgz72 2abe7498-896e-4f58-9e30-c006eb0a2b09 7159108 0 2020-05-24 00:25:12 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d0610c8e-c2cb-457a-a3ec-ed410c7501fc 0xc003672e07 0xc003672e08}] [] [{kube-controller-manager Update v1 2020-05-24 00:25:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d0610c8e-c2cb-457a-a3ec-ed410c7501fc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-24 00:25:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8t66x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8t66x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8t66x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-24 00:25:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 00:25:15.659: INFO: Pod "webserver-deployment-6676bcd6d4-tftkb" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-tftkb webserver-deployment-6676bcd6d4- deployment-6902 /api/v1/namespaces/deployment-6902/pods/webserver-deployment-6676bcd6d4-tftkb 45975062-cf33-4cd4-a05a-30c01e86edd3 7159197 0 2020-05-24 00:25:15 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d0610c8e-c2cb-457a-a3ec-ed410c7501fc 0xc003672fd7 0xc003672fd8}] [] [{kube-controller-manager Update v1 2020-05-24 00:25:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d0610c8e-c2cb-457a-a3ec-ed410c7501fc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8t66x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8t66x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8t66x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:15 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 00:25:15.659: INFO: Pod "webserver-deployment-6676bcd6d4-xcnjj" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-xcnjj webserver-deployment-6676bcd6d4- deployment-6902 /api/v1/namespaces/deployment-6902/pods/webserver-deployment-6676bcd6d4-xcnjj 918be1d0-00ed-40bc-bc72-8cd158d87f92 7159102 0 2020-05-24 00:25:12 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d0610c8e-c2cb-457a-a3ec-ed410c7501fc 0xc003673117 0xc003673118}] [] [{kube-controller-manager Update v1 2020-05-24 00:25:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d0610c8e-c2cb-457a-a3ec-ed410c7501fc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-24 00:25:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8t66x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8t66x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8t66x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-24 00:25:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 00:25:15.659: INFO: Pod "webserver-deployment-6676bcd6d4-zvx9r" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-zvx9r webserver-deployment-6676bcd6d4- deployment-6902 /api/v1/namespaces/deployment-6902/pods/webserver-deployment-6676bcd6d4-zvx9r 8b242701-10d6-425c-b1a0-d1d6d5d48ebf 7159188 0 2020-05-24 00:25:15 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 d0610c8e-c2cb-457a-a3ec-ed410c7501fc 0xc0036732c7 0xc0036732c8}] [] [{kube-controller-manager Update v1 2020-05-24 00:25:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d0610c8e-c2cb-457a-a3ec-ed410c7501fc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8t66x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8t66x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8t66x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:15 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 00:25:15.660: INFO: Pod "webserver-deployment-84855cf797-22rdf" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-22rdf webserver-deployment-84855cf797- deployment-6902 /api/v1/namespaces/deployment-6902/pods/webserver-deployment-84855cf797-22rdf f0467bcf-e171-4e83-a91d-fcdae880fcbb 7159192 0 2020-05-24 00:25:15 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 b7a9b29e-40cc-4f87-92e7-daae5c68a408 0xc003673427 0xc003673428}] [] [{kube-controller-manager Update v1 2020-05-24 00:25:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b7a9b29e-40cc-4f87-92e7-daae5c68a408\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8t66x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8t66x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8t66x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 00:25:15.660: INFO: Pod "webserver-deployment-84855cf797-6n8wx" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-6n8wx webserver-deployment-84855cf797- deployment-6902 /api/v1/namespaces/deployment-6902/pods/webserver-deployment-84855cf797-6n8wx 89e49011-088a-4364-8ec9-bb3bcc54f8a8 7159051 0 2020-05-24 00:24:59 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 b7a9b29e-40cc-4f87-92e7-daae5c68a408 0xc003673557 0xc003673558}] [] [{kube-controller-manager Update v1 2020-05-24 00:24:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b7a9b29e-40cc-4f87-92e7-daae5c68a408\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-24 00:25:11 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.169\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8t66x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8t66x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8t66x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 
00:25:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:24:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.169,StartTime:2020-05-24 00:25:00 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-24 00:25:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://1e73d725395562932224854cd706378f60cc8646a18ebb09b369e995148f811b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.169,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 00:25:15.660: INFO: Pod "webserver-deployment-84855cf797-6zlwc" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-6zlwc webserver-deployment-84855cf797- deployment-6902 /api/v1/namespaces/deployment-6902/pods/webserver-deployment-84855cf797-6zlwc bb1b3a63-0824-4c03-be45-43ff1a88feff 7159212 0 2020-05-24 00:25:15 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 b7a9b29e-40cc-4f87-92e7-daae5c68a408 0xc003673707 0xc003673708}] [] [{kube-controller-manager Update v1 2020-05-24 00:25:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b7a9b29e-40cc-4f87-92e7-daae5c68a408\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-24 00:25:15 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8t66x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8t66x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8t66x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:15 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-05-24 00:25:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 00:25:15.660: INFO: Pod "webserver-deployment-84855cf797-75v9x" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-75v9x webserver-deployment-84855cf797- deployment-6902 /api/v1/namespaces/deployment-6902/pods/webserver-deployment-84855cf797-75v9x dc205b24-35b3-47ec-b108-afec2c3913c7 7159171 0 2020-05-24 00:25:15 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 b7a9b29e-40cc-4f87-92e7-daae5c68a408 0xc003673897 0xc003673898}] [] [{kube-controller-manager Update v1 2020-05-24 00:25:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b7a9b29e-40cc-4f87-92e7-daae5c68a408\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8t66x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8t66x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8t66x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 00:25:15.661: INFO: Pod "webserver-deployment-84855cf797-7gp5x" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-7gp5x webserver-deployment-84855cf797- deployment-6902 /api/v1/namespaces/deployment-6902/pods/webserver-deployment-84855cf797-7gp5x 4f1533e2-0921-49da-80d6-72e765443a0e 7159191 0 2020-05-24 00:25:15 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 b7a9b29e-40cc-4f87-92e7-daae5c68a408 0xc0036739c7 0xc0036739c8}] [] [{kube-controller-manager Update v1 2020-05-24 00:25:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b7a9b29e-40cc-4f87-92e7-daae5c68a408\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8t66x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8t66x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8t66x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:15 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 00:25:15.661: INFO: Pod "webserver-deployment-84855cf797-9fc6k" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-9fc6k webserver-deployment-84855cf797- deployment-6902 /api/v1/namespaces/deployment-6902/pods/webserver-deployment-84855cf797-9fc6k 106483a1-b21c-48ed-8e71-4eb18694c585 7159155 0 2020-05-24 00:25:15 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 b7a9b29e-40cc-4f87-92e7-daae5c68a408 0xc003673b07 0xc003673b08}] [] [{kube-controller-manager Update v1 2020-05-24 00:25:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b7a9b29e-40cc-4f87-92e7-daae5c68a408\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8t66x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8t66x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8t66x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 00:25:15.661: INFO: Pod "webserver-deployment-84855cf797-fchls" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-fchls webserver-deployment-84855cf797- deployment-6902 /api/v1/namespaces/deployment-6902/pods/webserver-deployment-84855cf797-fchls 34ecdc90-aaf3-4919-8908-15244220525a 7159047 0 2020-05-24 00:24:59 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 b7a9b29e-40cc-4f87-92e7-daae5c68a408 0xc003673c37 0xc003673c38}] [] [{kube-controller-manager Update v1 2020-05-24 00:24:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b7a9b29e-40cc-4f87-92e7-daae5c68a408\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-24 00:25:11 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.171\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8t66x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8t66x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8t66x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 
00:25:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:24:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.171,StartTime:2020-05-24 00:25:00 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-24 00:25:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://349228142e7e279291eed50f10b70d6d51f772b5357f61db5040402077df511d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.171,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 00:25:15.661: INFO: Pod "webserver-deployment-84855cf797-ffffc" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-ffffc webserver-deployment-84855cf797- deployment-6902 /api/v1/namespaces/deployment-6902/pods/webserver-deployment-84855cf797-ffffc 75df1519-872f-4702-a201-d1bdd0449435 7159005 0 2020-05-24 00:24:59 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 b7a9b29e-40cc-4f87-92e7-daae5c68a408 0xc003673de7 0xc003673de8}] [] [{kube-controller-manager Update v1 2020-05-24 00:24:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b7a9b29e-40cc-4f87-92e7-daae5c68a408\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-24 00:25:07 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.156\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8t66x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8t66x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8t66x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:24:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 
00:25:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:24:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.156,StartTime:2020-05-24 00:24:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-24 00:25:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://85b3c21af8789c467b224aa26ba7bda46951d725175d7b7eb2b71d7ff95aa949,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.156,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 00:25:15.661: INFO: Pod "webserver-deployment-84855cf797-fpsjs" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-fpsjs webserver-deployment-84855cf797- deployment-6902 /api/v1/namespaces/deployment-6902/pods/webserver-deployment-84855cf797-fpsjs e6fee61e-45d3-4865-b40e-009f77eda3bb 7159210 0 2020-05-24 00:25:14 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 b7a9b29e-40cc-4f87-92e7-daae5c68a408 0xc003673f97 0xc003673f98}] [] [{kube-controller-manager Update v1 2020-05-24 00:25:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b7a9b29e-40cc-4f87-92e7-daae5c68a408\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-24 00:25:15 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8t66x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8t66x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8t66x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:15 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-24 00:25:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 00:25:15.662: INFO: Pod "webserver-deployment-84855cf797-fwv85" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-fwv85 webserver-deployment-84855cf797- deployment-6902 /api/v1/namespaces/deployment-6902/pods/webserver-deployment-84855cf797-fwv85 7d3611ba-a91d-4bca-a9cc-33a4d7bdd5c4 7159175 0 2020-05-24 00:25:15 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 b7a9b29e-40cc-4f87-92e7-daae5c68a408 0xc003ed2147 0xc003ed2148}] [] [{kube-controller-manager Update v1 2020-05-24 00:25:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b7a9b29e-40cc-4f87-92e7-daae5c68a408\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8t66x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8t66x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8t66x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFi
lesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 00:25:15.662: INFO: Pod "webserver-deployment-84855cf797-g7wkz" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-g7wkz webserver-deployment-84855cf797- deployment-6902 /api/v1/namespaces/deployment-6902/pods/webserver-deployment-84855cf797-g7wkz 384c1a18-f183-4464-b601-be03cd32e99a 7159167 0 2020-05-24 00:25:15 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 b7a9b29e-40cc-4f87-92e7-daae5c68a408 0xc003ed2277 0xc003ed2278}] [] [{kube-controller-manager Update v1 2020-05-24 00:25:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b7a9b29e-40cc-4f87-92e7-daae5c68a408\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8t66x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8t66x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8t66x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:15 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 00:25:15.662: INFO: Pod "webserver-deployment-84855cf797-j975k" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-j975k webserver-deployment-84855cf797- deployment-6902 /api/v1/namespaces/deployment-6902/pods/webserver-deployment-84855cf797-j975k d1146d65-ca77-410b-80df-c628c375d054 7159176 0 2020-05-24 00:25:15 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 b7a9b29e-40cc-4f87-92e7-daae5c68a408 0xc003ed23a7 0xc003ed23a8}] [] [{kube-controller-manager Update v1 2020-05-24 00:25:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b7a9b29e-40cc-4f87-92e7-daae5c68a408\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8t66x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8t66x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8t66x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:defaul
t-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 00:25:15.662: INFO: Pod "webserver-deployment-84855cf797-k2k45" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-k2k45 webserver-deployment-84855cf797- deployment-6902 /api/v1/namespaces/deployment-6902/pods/webserver-deployment-84855cf797-k2k45 39172642-b93f-4a8c-ab99-dbcc666edab2 7159193 0 2020-05-24 00:25:15 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 b7a9b29e-40cc-4f87-92e7-daae5c68a408 0xc003ed24d7 0xc003ed24d8}] [] [{kube-controller-manager Update v1 2020-05-24 00:25:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b7a9b29e-40cc-4f87-92e7-daae5c68a408\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8t66x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8t66x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8t66x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:
nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 00:25:15.662: INFO: Pod "webserver-deployment-84855cf797-k6ggk" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-k6ggk webserver-deployment-84855cf797- deployment-6902 /api/v1/namespaces/deployment-6902/pods/webserver-deployment-84855cf797-k6ggk c6e44397-12d7-4b33-ae9a-a2dc8e6daa34 7159065 0 2020-05-24 00:24:59 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 b7a9b29e-40cc-4f87-92e7-daae5c68a408 0xc003ed2607 0xc003ed2608}] [] [{kube-controller-manager Update v1 2020-05-24 00:24:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b7a9b29e-40cc-4f87-92e7-daae5c68a408\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-24 00:25:11 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.160\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8t66x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8t66x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8t66x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 
00:25:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:24:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.160,StartTime:2020-05-24 00:25:00 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-24 00:25:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ae9a9f1a738690fba92a059918cb4c014f3f29c848a0584b85feef57c5bc5de0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.160,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 00:25:15.662: INFO: Pod "webserver-deployment-84855cf797-lcdtk" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-lcdtk webserver-deployment-84855cf797- deployment-6902 /api/v1/namespaces/deployment-6902/pods/webserver-deployment-84855cf797-lcdtk a1f60690-6604-4b17-907c-861ce4bf6359 7159035 0 2020-05-24 00:24:59 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 b7a9b29e-40cc-4f87-92e7-daae5c68a408 0xc003ed27b7 0xc003ed27b8}] [] [{kube-controller-manager Update v1 2020-05-24 00:24:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b7a9b29e-40cc-4f87-92e7-daae5c68a408\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-24 00:25:10 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.170\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8t66x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8t66x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8t66x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 
00:25:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:24:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.1.170,StartTime:2020-05-24 00:25:00 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-24 00:25:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2bd8984c67b5152f8f41ccd286e806a4449cd7520261103826e1a678c722c22d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.170,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 00:25:15.662: INFO: Pod "webserver-deployment-84855cf797-ml8lr" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-ml8lr webserver-deployment-84855cf797- deployment-6902 /api/v1/namespaces/deployment-6902/pods/webserver-deployment-84855cf797-ml8lr 39b9af61-364b-4e77-a6a8-25099b8d5927 7159190 0 2020-05-24 00:25:15 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 b7a9b29e-40cc-4f87-92e7-daae5c68a408 0xc003ed2967 0xc003ed2968}] [] [{kube-controller-manager Update v1 2020-05-24 00:25:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b7a9b29e-40cc-4f87-92e7-daae5c68a408\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8t66x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8t66x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8t66x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 00:25:15.663: INFO: Pod "webserver-deployment-84855cf797-mql5s" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-mql5s webserver-deployment-84855cf797- deployment-6902 /api/v1/namespaces/deployment-6902/pods/webserver-deployment-84855cf797-mql5s 86e5beef-9b55-4069-8025-4d1415dda07b 7159026 0 2020-05-24 00:24:59 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 b7a9b29e-40cc-4f87-92e7-daae5c68a408 0xc003ed2ab7 0xc003ed2ab8}] [] [{kube-controller-manager Update v1 2020-05-24 00:24:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b7a9b29e-40cc-4f87-92e7-daae5c68a408\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-24 00:25:10 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.158\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8t66x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8t66x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8t66x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 
00:25:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:24:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.158,StartTime:2020-05-24 00:25:00 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-24 00:25:07 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://3d445c12eff6087311efc30c7d2b08a312e682fd25d6590424e234a26637c98a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.158,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 00:25:15.663: INFO: Pod "webserver-deployment-84855cf797-trr78" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-trr78 webserver-deployment-84855cf797- deployment-6902 /api/v1/namespaces/deployment-6902/pods/webserver-deployment-84855cf797-trr78 6b2df351-3c55-451e-bf65-aa8f1fbe06b1 7159042 0 2020-05-24 00:24:59 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 b7a9b29e-40cc-4f87-92e7-daae5c68a408 0xc003ed2c77 0xc003ed2c78}] [] [{kube-controller-manager Update v1 2020-05-24 00:24:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b7a9b29e-40cc-4f87-92e7-daae5c68a408\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-24 00:25:11 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.159\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8t66x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8t66x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8t66x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 
00:25:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:24:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.159,StartTime:2020-05-24 00:25:00 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-24 00:25:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://707a089e6ced2effb9616b78b07e34b67c02b9c7b605f6e71b7c2a701d03ccf5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.159,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 00:25:15.663: INFO: Pod "webserver-deployment-84855cf797-wdn57" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-wdn57 webserver-deployment-84855cf797- deployment-6902 /api/v1/namespaces/deployment-6902/pods/webserver-deployment-84855cf797-wdn57 53b91ef3-eb7f-4519-bf3b-7560da61a192 7159194 0 2020-05-24 00:25:15 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 b7a9b29e-40cc-4f87-92e7-daae5c68a408 0xc003ed2e87 0xc003ed2e88}] [] [{kube-controller-manager Update v1 2020-05-24 00:25:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b7a9b29e-40cc-4f87-92e7-daae5c68a408\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8t66x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8t66x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8t66x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 24 00:25:15.663: INFO: Pod "webserver-deployment-84855cf797-zx882" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-zx882 webserver-deployment-84855cf797- deployment-6902 /api/v1/namespaces/deployment-6902/pods/webserver-deployment-84855cf797-zx882 2ec4467d-dbba-4e96-8df4-4f336a3c0e0b 7159020 0 2020-05-24 00:24:59 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 b7a9b29e-40cc-4f87-92e7-daae5c68a408 0xc003ed2fb7 0xc003ed2fb8}] [] [{kube-controller-manager Update v1 2020-05-24 00:24:59 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b7a9b29e-40cc-4f87-92e7-daae5c68a408\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-24 00:25:09 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.157\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8t66x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8t66x,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8t66x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 
00:25:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:25:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:24:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.157,StartTime:2020-05-24 00:25:00 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-24 00:25:07 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://4aa3fe49b7a87fc380619f7a76544b6b588b1a9dfbcb73013dcc7a02d91677c2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.157,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:25:15.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6902" for this suite. • [SLOW TEST:16.171 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":288,"completed":182,"skipped":3077,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:25:15.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:25:32.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-1737" for this suite. 
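(For reference: the "wrapper volumes should not conflict" spec above creates a pod that mounts a Secret-backed volume and a ConfigMap-backed volume side by side; both are materialized by the kubelet as emptyDir "wrapper" volumes, and the check is that neither mount clobbers the other. Below is a minimal client-go sketch of that pod shape, not the suite's own code; the names wrapped-volume-pod, wrapped-secret and wrapped-cm are illustrative assumptions, while the image and kubeconfig path are taken from this log.)

package main

import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
)

func main() {
        // Load the same kubeconfig this run uses.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
                panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Pod, secret and configmap names are illustrative, not from the log.
        pod := &corev1.Pod{
                ObjectMeta: metav1.ObjectMeta{Name: "wrapped-volume-pod"},
                Spec: corev1.PodSpec{
                        RestartPolicy: corev1.RestartPolicyNever,
                        Containers: []corev1.Container{{
                                Name:  "c",
                                Image: "docker.io/library/httpd:2.4.38-alpine",
                                VolumeMounts: []corev1.VolumeMount{
                                        {Name: "secret-vol", MountPath: "/etc/secret-vol"},
                                        {Name: "cm-vol", MountPath: "/etc/cm-vol"},
                                },
                        }},
                        Volumes: []corev1.Volume{
                                {Name: "secret-vol", VolumeSource: corev1.VolumeSource{
                                        Secret: &corev1.SecretVolumeSource{SecretName: "wrapped-secret"},
                                }},
                                {Name: "cm-vol", VolumeSource: corev1.VolumeSource{
                                        ConfigMap: &corev1.ConfigMapVolumeSource{
                                                LocalObjectReference: corev1.LocalObjectReference{Name: "wrapped-cm"},
                                        },
                                }},
                        },
                },
        }
        // client-go v0.18+ Create signature, matching the kube-apiserver v1.18.2 in this run.
        if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
                panic(err)
        }
}

The suite's own teardown mirrors the STEP lines above: delete the secret, then the configmap, then the pod.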
• [SLOW TEST:16.846 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":288,"completed":183,"skipped":3099,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:25:32.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-downwardapi-cks6 STEP: Creating a pod to test atomic-volume-subpath May 24 00:25:33.437: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-cks6" in namespace "subpath-14" to be "Succeeded or Failed" May 24 00:25:33.593: INFO: Pod "pod-subpath-test-downwardapi-cks6": Phase="Pending", Reason="", readiness=false. Elapsed: 155.271187ms May 24 00:25:35.831: INFO: Pod "pod-subpath-test-downwardapi-cks6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.393636996s May 24 00:25:37.924: INFO: Pod "pod-subpath-test-downwardapi-cks6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.487029979s May 24 00:25:39.949: INFO: Pod "pod-subpath-test-downwardapi-cks6": Phase="Running", Reason="", readiness=true. Elapsed: 6.511897515s May 24 00:25:42.022: INFO: Pod "pod-subpath-test-downwardapi-cks6": Phase="Running", Reason="", readiness=true. Elapsed: 8.584952031s May 24 00:25:44.332: INFO: Pod "pod-subpath-test-downwardapi-cks6": Phase="Running", Reason="", readiness=true. Elapsed: 10.894913501s May 24 00:25:46.458: INFO: Pod "pod-subpath-test-downwardapi-cks6": Phase="Running", Reason="", readiness=true. Elapsed: 13.02096787s May 24 00:25:48.464: INFO: Pod "pod-subpath-test-downwardapi-cks6": Phase="Running", Reason="", readiness=true. Elapsed: 15.026584104s May 24 00:25:50.469: INFO: Pod "pod-subpath-test-downwardapi-cks6": Phase="Running", Reason="", readiness=true. Elapsed: 17.031621069s May 24 00:25:52.474: INFO: Pod "pod-subpath-test-downwardapi-cks6": Phase="Running", Reason="", readiness=true. Elapsed: 19.036364905s May 24 00:25:54.477: INFO: Pod "pod-subpath-test-downwardapi-cks6": Phase="Running", Reason="", readiness=true. Elapsed: 21.03984834s May 24 00:25:56.481: INFO: Pod "pod-subpath-test-downwardapi-cks6": Phase="Running", Reason="", readiness=true. Elapsed: 23.043812444s May 24 00:25:58.484: INFO: Pod "pod-subpath-test-downwardapi-cks6": Phase="Running", Reason="", readiness=true. Elapsed: 25.046702012s May 24 00:26:00.501: INFO: Pod "pod-subpath-test-downwardapi-cks6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 27.064204688s STEP: Saw pod success May 24 00:26:00.502: INFO: Pod "pod-subpath-test-downwardapi-cks6" satisfied condition "Succeeded or Failed" May 24 00:26:00.504: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-downwardapi-cks6 container test-container-subpath-downwardapi-cks6: STEP: delete the pod May 24 00:26:00.642: INFO: Waiting for pod pod-subpath-test-downwardapi-cks6 to disappear May 24 00:26:00.669: INFO: Pod pod-subpath-test-downwardapi-cks6 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-cks6 May 24 00:26:00.669: INFO: Deleting pod "pod-subpath-test-downwardapi-cks6" in namespace "subpath-14" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:26:00.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-14" for this suite. • [SLOW TEST:27.990 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":288,"completed":184,"skipped":3111,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:26:00.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-542 STEP: creating a selector STEP: Creating the service pods in kubernetes May 24 00:26:00.755: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 24 00:26:00.900: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 24 00:26:02.905: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 24 00:26:04.905: INFO: The status of Pod netserver-0 is Running (Ready = false) May 24 00:26:06.905: INFO: The status of Pod netserver-0 is Running (Ready = false) May 24 00:26:08.906: INFO: The status of Pod netserver-0 is Running (Ready = false) May 24 00:26:10.905: INFO: The status of Pod netserver-0 is Running (Ready = false) May 24 00:26:12.905: INFO: The status of Pod netserver-0 is Running (Ready = false) May 24 00:26:14.903: INFO: The status of Pod netserver-0 is Running (Ready = false) May 24 00:26:16.904: INFO: The status of Pod netserver-0 is Running (Ready = false) May 24 00:26:18.905: INFO: The status of Pod netserver-0 is Running 
(Ready = true) May 24 00:26:18.911: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 24 00:26:22.963: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.190:8080/dial?request=hostname&protocol=http&host=10.244.1.189&port=8080&tries=1'] Namespace:pod-network-test-542 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 00:26:22.963: INFO: >>> kubeConfig: /root/.kube/config I0524 00:26:22.992147 7 log.go:172] (0xc002b66000) (0xc001b361e0) Create stream I0524 00:26:22.992181 7 log.go:172] (0xc002b66000) (0xc001b361e0) Stream added, broadcasting: 1 I0524 00:26:22.993930 7 log.go:172] (0xc002b66000) Reply frame received for 1 I0524 00:26:22.993977 7 log.go:172] (0xc002b66000) (0xc001fb1040) Create stream I0524 00:26:22.994000 7 log.go:172] (0xc002b66000) (0xc001fb1040) Stream added, broadcasting: 3 I0524 00:26:22.994673 7 log.go:172] (0xc002b66000) Reply frame received for 3 I0524 00:26:22.994704 7 log.go:172] (0xc002b66000) (0xc001fb10e0) Create stream I0524 00:26:22.994715 7 log.go:172] (0xc002b66000) (0xc001fb10e0) Stream added, broadcasting: 5 I0524 00:26:22.995422 7 log.go:172] (0xc002b66000) Reply frame received for 5 I0524 00:26:23.066681 7 log.go:172] (0xc002b66000) Data frame received for 3 I0524 00:26:23.066716 7 log.go:172] (0xc001fb1040) (3) Data frame handling I0524 00:26:23.066739 7 log.go:172] (0xc001fb1040) (3) Data frame sent I0524 00:26:23.067159 7 log.go:172] (0xc002b66000) Data frame received for 3 I0524 00:26:23.067221 7 log.go:172] (0xc001fb1040) (3) Data frame handling I0524 00:26:23.067248 7 log.go:172] (0xc002b66000) Data frame received for 5 I0524 00:26:23.067261 7 log.go:172] (0xc001fb10e0) (5) Data frame handling I0524 00:26:23.068763 7 log.go:172] (0xc002b66000) Data frame received for 1 I0524 00:26:23.068801 7 log.go:172] (0xc001b361e0) (1) Data frame handling I0524 00:26:23.068832 7 log.go:172] (0xc001b361e0) (1) Data frame sent I0524 00:26:23.068852 7 log.go:172] (0xc002b66000) (0xc001b361e0) Stream removed, broadcasting: 1 I0524 00:26:23.068864 7 log.go:172] (0xc002b66000) Go away received I0524 00:26:23.068947 7 log.go:172] (0xc002b66000) (0xc001b361e0) Stream removed, broadcasting: 1 I0524 00:26:23.068962 7 log.go:172] (0xc002b66000) (0xc001fb1040) Stream removed, broadcasting: 3 I0524 00:26:23.068970 7 log.go:172] (0xc002b66000) (0xc001fb10e0) Stream removed, broadcasting: 5 May 24 00:26:23.069: INFO: Waiting for responses: map[] May 24 00:26:23.071: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.190:8080/dial?request=hostname&protocol=http&host=10.244.2.173&port=8080&tries=1'] Namespace:pod-network-test-542 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 00:26:23.071: INFO: >>> kubeConfig: /root/.kube/config I0524 00:26:23.098994 7 log.go:172] (0xc0035ea630) (0xc0023aca00) Create stream I0524 00:26:23.099026 7 log.go:172] (0xc0035ea630) (0xc0023aca00) Stream added, broadcasting: 1 I0524 00:26:23.100887 7 log.go:172] (0xc0035ea630) Reply frame received for 1 I0524 00:26:23.100921 7 log.go:172] (0xc0035ea630) (0xc0023acaa0) Create stream I0524 00:26:23.100932 7 log.go:172] (0xc0035ea630) (0xc0023acaa0) Stream added, broadcasting: 3 I0524 00:26:23.102252 7 log.go:172] (0xc0035ea630) Reply frame received for 3 I0524 00:26:23.102296 7 log.go:172] (0xc0035ea630) (0xc0017f3360) Create stream I0524 00:26:23.102309 7 
log.go:172] (0xc0035ea630) (0xc0017f3360) Stream added, broadcasting: 5 I0524 00:26:23.103418 7 log.go:172] (0xc0035ea630) Reply frame received for 5 I0524 00:26:23.165065 7 log.go:172] (0xc0035ea630) Data frame received for 3 I0524 00:26:23.165088 7 log.go:172] (0xc0023acaa0) (3) Data frame handling I0524 00:26:23.165103 7 log.go:172] (0xc0023acaa0) (3) Data frame sent I0524 00:26:23.166023 7 log.go:172] (0xc0035ea630) Data frame received for 5 I0524 00:26:23.166054 7 log.go:172] (0xc0017f3360) (5) Data frame handling I0524 00:26:23.166076 7 log.go:172] (0xc0035ea630) Data frame received for 3 I0524 00:26:23.166110 7 log.go:172] (0xc0023acaa0) (3) Data frame handling I0524 00:26:23.167873 7 log.go:172] (0xc0035ea630) Data frame received for 1 I0524 00:26:23.167898 7 log.go:172] (0xc0023aca00) (1) Data frame handling I0524 00:26:23.167913 7 log.go:172] (0xc0023aca00) (1) Data frame sent I0524 00:26:23.167976 7 log.go:172] (0xc0035ea630) (0xc0023aca00) Stream removed, broadcasting: 1 I0524 00:26:23.168012 7 log.go:172] (0xc0035ea630) Go away received I0524 00:26:23.168272 7 log.go:172] (0xc0035ea630) (0xc0023aca00) Stream removed, broadcasting: 1 I0524 00:26:23.168324 7 log.go:172] (0xc0035ea630) (0xc0023acaa0) Stream removed, broadcasting: 3 I0524 00:26:23.168339 7 log.go:172] (0xc0035ea630) (0xc0017f3360) Stream removed, broadcasting: 5 May 24 00:26:23.168: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:26:23.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-542" for this suite. • [SLOW TEST:22.496 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":288,"completed":185,"skipped":3144,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:26:23.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC May 24 00:26:23.245: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4621' May 24 00:26:26.351: INFO: stderr: "" May 24 00:26:26.351: INFO: stdout: 
"replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 24 00:26:27.355: INFO: Selector matched 1 pods for map[app:agnhost] May 24 00:26:27.356: INFO: Found 0 / 1 May 24 00:26:28.371: INFO: Selector matched 1 pods for map[app:agnhost] May 24 00:26:28.371: INFO: Found 0 / 1 May 24 00:26:29.573: INFO: Selector matched 1 pods for map[app:agnhost] May 24 00:26:29.573: INFO: Found 0 / 1 May 24 00:26:30.581: INFO: Selector matched 1 pods for map[app:agnhost] May 24 00:26:30.581: INFO: Found 1 / 1 May 24 00:26:30.581: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 24 00:26:30.584: INFO: Selector matched 1 pods for map[app:agnhost] May 24 00:26:30.584: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 24 00:26:30.584: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config patch pod agnhost-master-xwfph --namespace=kubectl-4621 -p {"metadata":{"annotations":{"x":"y"}}}' May 24 00:26:30.734: INFO: stderr: "" May 24 00:26:30.734: INFO: stdout: "pod/agnhost-master-xwfph patched\n" STEP: checking annotations May 24 00:26:30.758: INFO: Selector matched 1 pods for map[app:agnhost] May 24 00:26:30.758: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:26:30.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4621" for this suite. • [SLOW TEST:7.590 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1468 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":288,"completed":186,"skipped":3156,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:26:30.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium May 24 00:26:31.185: INFO: Waiting up to 5m0s for pod "pod-04a2154f-14da-4a3a-8308-f1a8950b7d8d" in namespace "emptydir-5918" to be "Succeeded or Failed" May 24 00:26:31.368: INFO: Pod "pod-04a2154f-14da-4a3a-8308-f1a8950b7d8d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 183.067309ms May 24 00:26:33.452: INFO: Pod "pod-04a2154f-14da-4a3a-8308-f1a8950b7d8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.266903782s May 24 00:26:35.456: INFO: Pod "pod-04a2154f-14da-4a3a-8308-f1a8950b7d8d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.271364309s STEP: Saw pod success May 24 00:26:35.456: INFO: Pod "pod-04a2154f-14da-4a3a-8308-f1a8950b7d8d" satisfied condition "Succeeded or Failed" May 24 00:26:35.459: INFO: Trying to get logs from node latest-worker pod pod-04a2154f-14da-4a3a-8308-f1a8950b7d8d container test-container: STEP: delete the pod May 24 00:26:35.560: INFO: Waiting for pod pod-04a2154f-14da-4a3a-8308-f1a8950b7d8d to disappear May 24 00:26:35.564: INFO: Pod pod-04a2154f-14da-4a3a-8308-f1a8950b7d8d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:26:35.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5918" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":187,"skipped":3234,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:26:35.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod May 24 00:26:35.693: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:26:41.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5371" for this suite. 
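(For reference: the InitContainer spec above asserts that on a pod with RestartPolicy=Never, a failing init container drives the pod to Phase=Failed and the app container is never started. A minimal sketch of such a pod follows, built with the same client-go types the suite serializes in its dumps; the names init-fails-never, init-fail, app and the busybox image are assumptions for illustration, not taken from the log.)

package main

import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
)

func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
                panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // All names and the busybox image below are illustrative assumptions.
        pod := &corev1.Pod{
                ObjectMeta: metav1.ObjectMeta{Name: "init-fails-never"},
                Spec: corev1.PodSpec{
                        // Never restart: a single failed init attempt is final.
                        RestartPolicy: corev1.RestartPolicyNever,
                        InitContainers: []corev1.Container{{
                                Name:    "init-fail",
                                Image:   "docker.io/library/busybox:1.29",
                                Command: []string{"/bin/false"}, // exits non-zero, so init fails
                        }},
                        Containers: []corev1.Container{{
                                Name:    "app",
                                Image:   "docker.io/library/busybox:1.29",
                                Command: []string{"sleep", "3600"}, // should never be started
                        }},
                },
        }
        if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
                panic(err)
        }
        // Expected end state: pod Phase=Failed, app container with zero starts.
}

Watching such a pod should show it settle at Phase=Failed without the app container ever reporting a start, consistent with the roughly six-second wait recorded in the summary that follows.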
• [SLOW TEST:6.299 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":288,"completed":188,"skipped":3265,"failed":0} SSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:26:41.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 24 00:26:41.966: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 24 00:26:46.969: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 24 00:26:46.969: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 24 00:26:48.974: INFO: Creating deployment "test-rollover-deployment" May 24 00:26:48.984: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 24 00:26:50.989: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 24 00:26:50.996: INFO: Ensure that both replica sets have 1 created replica May 24 00:26:51.003: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 24 00:26:51.010: INFO: Updating deployment test-rollover-deployment May 24 00:26:51.010: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 24 00:26:53.099: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 24 00:26:53.106: INFO: Make sure deployment "test-rollover-deployment" is complete May 24 00:26:53.112: INFO: all replica sets need to contain the pod-template-hash label May 24 00:26:53.112: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876809, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876809, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876811, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876809, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 00:26:55.121: INFO: all replica sets need to contain the pod-template-hash label May 24 00:26:55.122: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876809, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876809, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876813, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876809, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 00:26:57.120: INFO: all replica sets need to contain the pod-template-hash label May 24 00:26:57.120: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876809, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876809, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876813, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876809, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 00:26:59.119: INFO: all replica sets need to contain the pod-template-hash label May 24 00:26:59.119: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876809, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876809, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876813, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876809, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 00:27:01.133: INFO: all replica sets need to contain the pod-template-hash label May 24 00:27:01.133: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876809, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876809, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876813, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876809, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 00:27:03.120: INFO: all replica sets need to contain the pod-template-hash label May 24 00:27:03.120: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876809, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876809, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876813, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876809, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7c4fd9c879\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 00:27:05.121: INFO: May 24 00:27:05.121: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 24 00:27:05.130: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-9256 /apis/apps/v1/namespaces/deployment-9256/deployments/test-rollover-deployment af98b776-e1ec-4f1c-abc8-b3f708c821f7 7160098 2 2020-05-24 00:26:48 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-24 00:26:51 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-24 00:27:03 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00584e378 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-24 00:26:49 +0000 UTC,LastTransitionTime:2020-05-24 00:26:49 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-7c4fd9c879" has successfully progressed.,LastUpdateTime:2020-05-24 00:27:03 +0000 UTC,LastTransitionTime:2020-05-24 00:26:49 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 24 00:27:05.134: INFO: New ReplicaSet "test-rollover-deployment-7c4fd9c879" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-7c4fd9c879 deployment-9256 /apis/apps/v1/namespaces/deployment-9256/replicasets/test-rollover-deployment-7c4fd9c879 397367a7-256a-44fa-81ad-0e5f28a82dd2 7160087 2 2020-05-24 00:26:51 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment af98b776-e1ec-4f1c-abc8-b3f708c821f7 0xc0038138a7 0xc0038138a8}] [] [{kube-controller-manager Update apps/v1 2020-05-24 00:27:03 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af98b776-e1ec-4f1c-abc8-b3f708c821f7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 7c4fd9c879,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003813938 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 24 00:27:05.134: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 24 00:27:05.134: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-9256 /apis/apps/v1/namespaces/deployment-9256/replicasets/test-rollover-controller 99cedbb9-0357-48b5-8c35-45fc6f783cbf 7160097 2 2020-05-24 00:26:41 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment af98b776-e1ec-4f1c-abc8-b3f708c821f7 0xc00381368f 0xc0038136a0}] [] [{e2e.test Update apps/v1 2020-05-24 00:26:41 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-24 00:27:03 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af98b776-e1ec-4f1c-abc8-b3f708c821f7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003813738 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 24 00:27:05.134: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5 deployment-9256 /apis/apps/v1/namespaces/deployment-9256/replicasets/test-rollover-deployment-5686c4cfd5 29bbbf94-c8bf-4837-b900-f079a32922e1 7160036 2 2020-05-24 00:26:48 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment af98b776-e1ec-4f1c-abc8-b3f708c821f7 0xc0038137a7 0xc0038137a8}] [] [{kube-controller-manager Update apps/v1 2020-05-24 00:26:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"af98b776-e1ec-4f1c-abc8-b3f708c821f7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003813838 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 24 00:27:05.137: INFO: Pod "test-rollover-deployment-7c4fd9c879-gcm72" is available: &Pod{ObjectMeta:{test-rollover-deployment-7c4fd9c879-gcm72 test-rollover-deployment-7c4fd9c879- deployment-9256 /api/v1/namespaces/deployment-9256/pods/test-rollover-deployment-7c4fd9c879-gcm72 c1000d02-f50c-4fa7-bd83-bca3d1b98a1e 7160052 0 2020-05-24 00:26:51 +0000 UTC map[name:rollover-pod pod-template-hash:7c4fd9c879] map[] [{apps/v1 ReplicaSet test-rollover-deployment-7c4fd9c879 397367a7-256a-44fa-81ad-0e5f28a82dd2 0xc003d31297 0xc003d31298}] [] [{kube-controller-manager Update v1 2020-05-24 00:26:51 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"397367a7-256a-44fa-81ad-0e5f28a82dd2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-24 00:26:53 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.176\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kqrjr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kqrjr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kqrjr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:26:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-05-24 00:26:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:26:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:26:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.176,StartTime:2020-05-24 00:26:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-24 00:26:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://009535bca656eb62425257d1215de1e4d2260afbd407a3d79a81668a61666874,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.176,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:27:05.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9256" for this suite. • [SLOW TEST:23.272 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":288,"completed":189,"skipped":3272,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:27:05.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 24 00:27:05.402: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"78c1b33d-18db-4f49-abb7-7a7404c94944", Controller:(*bool)(0xc00584efea), BlockOwnerDeletion:(*bool)(0xc00584efeb)}} May 24 00:27:05.473: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"5634584e-3dd2-4ac7-bfd9-ff58cc8aab7b", Controller:(*bool)(0xc002ba950a), BlockOwnerDeletion:(*bool)(0xc002ba950b)}} May 24 00:27:05.511: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"abca4569-1442-4cf0-8180-fefaead3a07a", Controller:(*bool)(0xc002d0a582), BlockOwnerDeletion:(*bool)(0xc002d0a583)}} [AfterEach] 
[sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:27:10.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9693" for this suite. • [SLOW TEST:5.455 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":288,"completed":190,"skipped":3285,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:27:10.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 24 00:27:11.058: INFO: Create a RollingUpdate DaemonSet May 24 00:27:11.069: INFO: Check that daemon pods launch on every node of the cluster May 24 00:27:11.125: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:27:11.165: INFO: Number of nodes with available pods: 0 May 24 00:27:11.165: INFO: Node latest-worker is running more than one daemon pod May 24 00:27:12.170: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:27:12.173: INFO: Number of nodes with available pods: 0 May 24 00:27:12.173: INFO: Node latest-worker is running more than one daemon pod May 24 00:27:13.208: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:27:13.212: INFO: Number of nodes with available pods: 0 May 24 00:27:13.212: INFO: Node latest-worker is running more than one daemon pod May 24 00:27:14.286: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:27:14.289: INFO: Number of nodes with available pods: 0 May 24 00:27:14.289: INFO: Node latest-worker is running more than one daemon pod May 24 00:27:15.171: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:27:15.175: INFO: Number of nodes with available pods: 1 May 24 00:27:15.175: INFO: Node latest-worker2 is running more than one 
daemon pod May 24 00:27:16.190: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:27:16.194: INFO: Number of nodes with available pods: 2 May 24 00:27:16.194: INFO: Number of running nodes: 2, number of available pods: 2 May 24 00:27:16.194: INFO: Update the DaemonSet to trigger a rollout May 24 00:27:16.202: INFO: Updating DaemonSet daemon-set May 24 00:27:20.256: INFO: Roll back the DaemonSet before rollout is complete May 24 00:27:20.264: INFO: Updating DaemonSet daemon-set May 24 00:27:20.264: INFO: Make sure DaemonSet rollback is complete May 24 00:27:20.287: INFO: Wrong image for pod: daemon-set-vl477. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 24 00:27:20.287: INFO: Pod daemon-set-vl477 is not available May 24 00:27:20.305: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:27:21.309: INFO: Wrong image for pod: daemon-set-vl477. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 24 00:27:21.309: INFO: Pod daemon-set-vl477 is not available May 24 00:27:21.313: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 24 00:27:22.309: INFO: Pod daemon-set-vczf6 is not available May 24 00:27:22.314: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8398, will wait for the garbage collector to delete the pods May 24 00:27:22.383: INFO: Deleting DaemonSet.extensions daemon-set took: 7.021917ms May 24 00:27:22.484: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.280385ms May 24 00:27:25.287: INFO: Number of nodes with available pods: 0 May 24 00:27:25.287: INFO: Number of running nodes: 0, number of available pods: 0 May 24 00:27:25.290: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8398/daemonsets","resourceVersion":"7160310"},"items":null} May 24 00:27:25.293: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8398/pods","resourceVersion":"7160310"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:27:25.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8398" for this suite. 
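The sequence above updates the DaemonSet to a broken image (foo:non-existent) and then reverts the template before the rollout completes, which is why the replaced pod comes back without unnecessary restarts. A minimal client-go sketch of that same update-then-rollback sequence, assuming a kubeconfig at /root/.kube/config and a DaemonSet named daemon-set in the default namespace (both illustrative, not taken from this run):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	ds, err := cs.AppsV1().DaemonSets("default").Get(ctx, "daemon-set", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	good := ds.Spec.Template.Spec.Containers[0].Image // remember the working image

	// Trigger a rollout that cannot succeed.
	ds.Spec.Template.Spec.Containers[0].Image = "foo:non-existent"
	if ds, err = cs.AppsV1().DaemonSets("default").Update(ctx, ds, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// Roll back before the broken rollout completes (a real program would
	// retry on resourceVersion conflicts here).
	ds.Spec.Template.Spec.Containers[0].Image = good
	if _, err = cs.AppsV1().DaemonSets("default").Update(ctx, ds, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("rollback submitted")
}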
• [SLOW TEST:14.743 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":288,"completed":191,"skipped":3308,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:27:25.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 00:27:26.170: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 24 00:27:28.179: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876846, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876846, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876846, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876846, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 00:27:31.231: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the 
mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:27:31.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-779" for this suite. STEP: Destroying namespace "webhook-779-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.229 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":288,"completed":192,"skipped":3313,"failed":0} SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:27:31.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-7d3db330-530e-4c13-ab6d-3b58f8363365 STEP: Creating a pod to test consume configMaps May 24 00:27:31.730: INFO: Waiting up to 5m0s for pod "pod-configmaps-9a9b6fbf-2e64-486c-996d-a6691f7e06bf" in namespace "configmap-2159" to be "Succeeded or Failed" May 24 00:27:31.771: INFO: Pod "pod-configmaps-9a9b6fbf-2e64-486c-996d-a6691f7e06bf": Phase="Pending", Reason="", readiness=false. Elapsed: 41.505764ms May 24 00:27:33.775: INFO: Pod "pod-configmaps-9a9b6fbf-2e64-486c-996d-a6691f7e06bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045169319s May 24 00:27:35.779: INFO: Pod "pod-configmaps-9a9b6fbf-2e64-486c-996d-a6691f7e06bf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.049081971s STEP: Saw pod success May 24 00:27:35.779: INFO: Pod "pod-configmaps-9a9b6fbf-2e64-486c-996d-a6691f7e06bf" satisfied condition "Succeeded or Failed" May 24 00:27:35.782: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-9a9b6fbf-2e64-486c-996d-a6691f7e06bf container configmap-volume-test: STEP: delete the pod May 24 00:27:35.828: INFO: Waiting for pod pod-configmaps-9a9b6fbf-2e64-486c-996d-a6691f7e06bf to disappear May 24 00:27:35.843: INFO: Pod pod-configmaps-9a9b6fbf-2e64-486c-996d-a6691f7e06bf no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:27:35.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2159" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":288,"completed":193,"skipped":3319,"failed":0} S ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:27:35.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 24 00:27:35.997: INFO: Creating deployment "test-recreate-deployment" May 24 00:27:36.039: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1 May 24 00:27:36.050: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 24 00:27:38.057: INFO: Waiting for deployment "test-recreate-deployment" to complete May 24 00:27:38.059: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876856, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876856, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876856, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876856, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6d65b9f6d8\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 00:27:40.070: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 24 00:27:40.078: INFO: Updating deployment test-recreate-deployment May 24 00:27:40.078: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
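The Recreate strategy being tested differs from the default RollingUpdate: every old pod is deleted before any new pod is created, which is why the status dumps below briefly show zero available replicas during the rollout. A minimal sketch of a Deployment with this strategy, reusing the httpd image and sample-pod-3 labels seen in the dumps (the namespace and kubeconfig path are assumptions):

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	labels := map[string]string{"name": "sample-pod-3"}
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// Recreate: scale the old ReplicaSet to zero before the new one starts.
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "httpd",
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}
	if _, err := cs.AppsV1().Deployments("default").Create(context.Background(), d, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}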
[AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 May 24 00:27:40.836: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-4458 /apis/apps/v1/namespaces/deployment-4458/deployments/test-recreate-deployment 22532669-deb8-42eb-87fd-0b4be59546af 7160511 2 2020-05-24 00:27:35 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-24 00:27:40 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-05-24 00:27:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003859f88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-24 00:27:40 +0000 UTC,LastTransitionTime:2020-05-24 00:27:40 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-05-24 00:27:40 +0000 UTC,LastTransitionTime:2020-05-24 00:27:36 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 24 00:27:40.840: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment 
"test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7 deployment-4458 /apis/apps/v1/namespaces/deployment-4458/replicasets/test-recreate-deployment-d5667d9c7 53334349-ce07-4bd1-9ddd-93c6146ca2aa 7160508 1 2020-05-24 00:27:40 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 22532669-deb8-42eb-87fd-0b4be59546af 0xc003302480 0xc003302481}] [] [{kube-controller-manager Update apps/v1 2020-05-24 00:27:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"22532669-deb8-42eb-87fd-0b4be59546af\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0033024f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 24 00:27:40.840: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 24 00:27:40.840: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6d65b9f6d8 deployment-4458 /apis/apps/v1/namespaces/deployment-4458/replicasets/test-recreate-deployment-6d65b9f6d8 b2732966-04cf-4e1f-941e-35c3ccf6784c 7160499 2 2020-05-24 00:27:36 +0000 UTC map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 22532669-deb8-42eb-87fd-0b4be59546af 0xc003302387 0xc003302388}] [] [{kube-controller-manager Update apps/v1 2020-05-24 00:27:40 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"22532669-deb8-42eb-87fd-0b4be59546af\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6d65b9f6d8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:6d65b9f6d8] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003302418 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 24 00:27:40.909: INFO: Pod "test-recreate-deployment-d5667d9c7-555p6" is not available: &Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-555p6 test-recreate-deployment-d5667d9c7- deployment-4458 /api/v1/namespaces/deployment-4458/pods/test-recreate-deployment-d5667d9c7-555p6 6a13469d-4e1a-4ea2-a46a-fcebb38c4dbe 7160513 0 2020-05-24 00:27:40 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 53334349-ce07-4bd1-9ddd-93c6146ca2aa 0xc002f850e0 0xc002f850e1}] [] [{kube-controller-manager Update v1 2020-05-24 00:27:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53334349-ce07-4bd1-9ddd-93c6146ca2aa\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-24 00:27:40 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-km9xz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-km9xz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-km9xz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:27:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:27:40 +0000 
UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:27:40 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:27:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-05-24 00:27:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:27:40.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4458" for this suite. • [SLOW TEST:5.214 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":288,"completed":194,"skipped":3320,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:27:41.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 24 00:27:42.795: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 24 00:27:44.980: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876862, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876862, 
loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876863, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876862, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 00:27:48.010: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 24 00:27:48.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:27:49.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-8619" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:8.307 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":288,"completed":195,"skipped":3328,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:27:49.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-8ea6adb0-1cbc-4225-b089-0b96baff7715 STEP: Creating a pod to test consume configMaps May 24 00:27:49.575: INFO: Waiting up to 5m0s for pod "pod-configmaps-7917b133-7bac-4b66-b128-664a4e8fd62b" in namespace "configmap-1416" to be "Succeeded or Failed" May 24 00:27:49.616: INFO: Pod "pod-configmaps-7917b133-7bac-4b66-b128-664a4e8fd62b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 41.113934ms May 24 00:27:51.632: INFO: Pod "pod-configmaps-7917b133-7bac-4b66-b128-664a4e8fd62b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057210458s May 24 00:27:53.637: INFO: Pod "pod-configmaps-7917b133-7bac-4b66-b128-664a4e8fd62b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061539397s STEP: Saw pod success May 24 00:27:53.637: INFO: Pod "pod-configmaps-7917b133-7bac-4b66-b128-664a4e8fd62b" satisfied condition "Succeeded or Failed" May 24 00:27:53.640: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-7917b133-7bac-4b66-b128-664a4e8fd62b container configmap-volume-test: STEP: delete the pod May 24 00:27:53.675: INFO: Waiting for pod pod-configmaps-7917b133-7bac-4b66-b128-664a4e8fd62b to disappear May 24 00:27:53.689: INFO: Pod pod-configmaps-7917b133-7bac-4b66-b128-664a4e8fd62b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:27:53.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1416" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":288,"completed":196,"skipped":3339,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:27:53.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's args May 24 00:27:53.788: INFO: Waiting up to 5m0s for pod "var-expansion-e4ef3627-074e-4d1b-b4a6-cc2c9cc601da" in namespace "var-expansion-2165" to be "Succeeded or Failed" May 24 00:27:53.791: INFO: Pod "var-expansion-e4ef3627-074e-4d1b-b4a6-cc2c9cc601da": Phase="Pending", Reason="", readiness=false. Elapsed: 3.174163ms May 24 00:27:55.914: INFO: Pod "var-expansion-e4ef3627-074e-4d1b-b4a6-cc2c9cc601da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126555357s May 24 00:27:57.942: INFO: Pod "var-expansion-e4ef3627-074e-4d1b-b4a6-cc2c9cc601da": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.154003218s STEP: Saw pod success May 24 00:27:57.942: INFO: Pod "var-expansion-e4ef3627-074e-4d1b-b4a6-cc2c9cc601da" satisfied condition "Succeeded or Failed" May 24 00:27:57.945: INFO: Trying to get logs from node latest-worker2 pod var-expansion-e4ef3627-074e-4d1b-b4a6-cc2c9cc601da container dapi-container: STEP: delete the pod May 24 00:27:57.967: INFO: Waiting for pod var-expansion-e4ef3627-074e-4d1b-b4a6-cc2c9cc601da to disappear May 24 00:27:57.991: INFO: Pod var-expansion-e4ef3627-074e-4d1b-b4a6-cc2c9cc601da no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:27:57.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2165" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":288,"completed":197,"skipped":3347,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:27:58.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 00:27:59.153: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 24 00:28:01.180: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876879, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876879, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876879, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876879, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 00:28:03.183: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876879, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876879, 
loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876879, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876879, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 00:28:06.214: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:28:18.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2913" for this suite. STEP: Destroying namespace "webhook-2913-markers" for this suite. 
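The timeout test registers a webhook whose latency (5s) exceeds its timeout (1s) and expects the request to fail, then repeats with failurePolicy Ignore and expects no error. A minimal sketch of such a registration, assuming a backing service named e2e-test-webhook in the default namespace and a hypothetical slow endpoint path; the CABundle a real cluster would need for TLS is omitted:

package main

import (
	"context"

	admissionv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	timeout := int32(1)                       // shorter than the webhook's latency
	ignore := admissionv1.Ignore              // timeouts then admit instead of rejecting
	none := admissionv1.SideEffectClassNone
	path := "/always-allow-delay-5s"          // hypothetical slow endpoint
	wh := &admissionv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "slow-webhook"},
		Webhooks: []admissionv1.ValidatingWebhook{{
			Name:                    "slow.example.com",
			TimeoutSeconds:          &timeout,
			FailurePolicy:           &ignore,
			SideEffects:             &none,
			AdmissionReviewVersions: []string{"v1"},
			ClientConfig: admissionv1.WebhookClientConfig{
				Service: &admissionv1.ServiceReference{
					Namespace: "default", Name: "e2e-test-webhook", Path: &path,
				},
			},
			Rules: []admissionv1.RuleWithOperations{{
				Operations: []admissionv1.OperationType{admissionv1.Create},
				Rule: admissionv1.Rule{
					APIGroups: []string{""}, APIVersions: []string{"v1"}, Resources: []string{"pods"},
				},
			}},
		}},
	}
	_, err = cs.AdmissionregistrationV1().ValidatingWebhookConfigurations().Create(
		context.Background(), wh, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
}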
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:20.486 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":288,"completed":198,"skipped":3351,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:28:18.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 00:28:19.552: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 24 00:28:21.563: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876899, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876899, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876899, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876899, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} May 24 00:28:23.570: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876899, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876899, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876899, loc:(*time.Location)(0x7c342a0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725876899, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 00:28:26.602: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one, which should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one, which should be rejected by the webhook STEP: create a namespace that bypasses the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:28:36.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-41" for this suite. STEP: Destroying namespace "webhook-41-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.505 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":288,"completed":199,"skipped":3368,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:28:37.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set.
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:28:48.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7130" for this suite. • [SLOW TEST:11.218 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":288,"completed":200,"skipped":3440,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:28:48.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:29:04.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-891" for this suite. • [SLOW TEST:16.246 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":288,"completed":201,"skipped":3453,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:29:04.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:80 May 24 00:29:04.601: INFO: Waiting up to 1m0s for all nodes to be ready May 24 00:30:04.622: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:30:04.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:467 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. May 24 00:30:08.782: INFO: found a healthy node: latest-worker2 [It] runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 24 00:30:24.979: INFO: pods created so far: [1 1 1] May 24 00:30:24.979: INFO: length of pods created so far: 3 May 24 00:30:38.988: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:30:45.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-4419" for this suite. [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:439 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:30:46.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-4912" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:74 • [SLOW TEST:101.671 seconds] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:428 runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":288,"completed":202,"skipped":3465,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:30:46.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 24 00:30:46.309: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 24 00:30:46.319: INFO: Waiting for terminating namespaces to be deleted... 
May 24 00:30:46.322: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 24 00:30:46.327: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) May 24 00:30:46.327: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 24 00:30:46.327: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) May 24 00:30:46.327: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 24 00:30:46.327: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 24 00:30:46.327: INFO: Container kindnet-cni ready: true, restart count 0 May 24 00:30:46.327: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 24 00:30:46.327: INFO: Container kube-proxy ready: true, restart count 0 May 24 00:30:46.327: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 24 00:30:46.333: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) May 24 00:30:46.333: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 24 00:30:46.333: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) May 24 00:30:46.333: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 24 00:30:46.333: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 24 00:30:46.333: INFO: Container kindnet-cni ready: true, restart count 0 May 24 00:30:46.333: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 24 00:30:46.333: INFO: Container kube-proxy ready: true, restart count 0 May 24 00:30:46.333: INFO: pod4 from sched-preemption-path-4419 started at 2020-05-24 00:30:37 +0000 UTC (1 container statuses recorded) May 24 00:30:46.333: INFO: Container pod4 ready: true, restart count 0 May 24 00:30:46.333: INFO: rs-pod3-6j9wd from sched-preemption-path-4419 started at 2020-05-24 00:30:21 +0000 UTC (1 container statuses recorded) May 24 00:30:46.333: INFO: Container pod3 ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-f8617470-54c0-46d2-af6f-1e45ef297470 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-f8617470-54c0-46d2-af6f-1e45ef297470 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-f8617470-54c0-46d2-af6f-1e45ef297470 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:30:54.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7290" for this suite. 
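
The matching case above boils down to a nodeSelector that mirrors a label placed on the node. A minimal sketch of what the relaunched pod likely resembles; the label key and value ("42") are taken from the log, while the image is an assumption:

apiVersion: v1
kind: Pod
metadata:
  name: with-labels               # pod name as it appears in the next node listing
spec:
  nodeSelector:
    kubernetes.io/e2e-f8617470-54c0-46d2-af6f-1e45ef297470: "42"
  containers:
  - name: with-labels
    image: k8s.gcr.io/pause:3.2   # assumed image

The scheduler will only bind this pod to a node carrying exactly that label, which is the property the test asserts before stripping the label off latest-worker again.
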
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.427 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":288,"completed":203,"skipped":3478,"failed":0} SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:30:54.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 24 00:30:54.685: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 24 00:30:54.696: INFO: Waiting for terminating namespaces to be deleted... May 24 00:30:54.698: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 24 00:30:54.702: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) May 24 00:30:54.702: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 24 00:30:54.702: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) May 24 00:30:54.702: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 24 00:30:54.702: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 24 00:30:54.702: INFO: Container kindnet-cni ready: true, restart count 0 May 24 00:30:54.702: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 24 00:30:54.702: INFO: Container kube-proxy ready: true, restart count 0 May 24 00:30:54.702: INFO: with-labels from sched-pred-7290 started at 2020-05-24 00:30:50 +0000 UTC (1 container statuses recorded) May 24 00:30:54.702: INFO: Container with-labels ready: true, restart count 0 May 24 00:30:54.702: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 24 00:30:54.706: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses recorded) May 24 00:30:54.706: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 24 00:30:54.706: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) May 24 00:30:54.706: INFO: Container terminate-cmd-rpa ready: true, restart count 2
May 24 00:30:54.706: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 24 00:30:54.706: INFO: Container kindnet-cni ready: true, restart count 0 May 24 00:30:54.706: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 24 00:30:54.706: INFO: Container kube-proxy ready: true, restart count 0 May 24 00:30:54.706: INFO: pod4 from sched-preemption-path-4419 started at 2020-05-24 00:30:37 +0000 UTC (1 container statuses recorded) May 24 00:30:54.706: INFO: Container pod4 ready: false, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-ec38c6f6-5fcf-4e75-8de8-f9d263b8c7d1 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-ec38c6f6-5fcf-4e75-8de8-f9d263b8c7d1 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-ec38c6f6-5fcf-4e75-8de8-f9d263b8c7d1 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:36:03.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9725" for this suite.
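
The conflict above follows from how the scheduler reserves host ports: an empty hostIP is treated as 0.0.0.0, i.e. every address on the node, so pod4's claim on TCP 54322 blocks pod5's narrower claim on 127.0.0.1 for the same port and protocol. A minimal sketch of the two specs, assuming the pause image; the port, hostIPs, and node label are taken from the log:

apiVersion: v1
kind: Pod
metadata:
  name: pod4
spec:
  nodeSelector:
    kubernetes.io/e2e-ec38c6f6-5fcf-4e75-8de8-f9d263b8c7d1: "95"
  containers:
  - name: main                    # hypothetical container name
    image: k8s.gcr.io/pause:3.2   # assumed image
    ports:
    - containerPort: 54322
      hostPort: 54322
      protocol: TCP               # hostIP omitted, i.e. 0.0.0.0
---
apiVersion: v1
kind: Pod
metadata:
  name: pod5
spec:
  nodeSelector:
    kubernetes.io/e2e-ec38c6f6-5fcf-4e75-8de8-f9d263b8c7d1: "95"
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.2
    ports:
    - containerPort: 54322
      hostPort: 54322
      protocol: TCP
      hostIP: 127.0.0.1           # conflicts with pod4's implicit 0.0.0.0

pod5 stays Pending until the test gives up waiting for it to schedule, which is why roughly five minutes elapse before the teardown timestamps above.
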
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:308.470 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":288,"completed":204,"skipped":3483,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:36:03.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 24 00:36:07.203: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:36:07.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-761" for this suite. 
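
The two container fields under test here are terminationMessagePath (a non-default file) and a non-root securityContext; after the container exits, the kubelet copies that file into the container status, which is what the "Expected: &{DONE} to match ..." line compares. A minimal sketch under those assumptions, with a hypothetical name and busybox standing in for the e2e image:

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox                 # assumed image
    command: ["/bin/sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
    terminationMessagePath: /dev/termination-custom-log   # non-default path
    securityContext:
      runAsUser: 1000              # non-root user
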
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":288,"completed":205,"skipped":3532,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:36:07.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:36:07.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4907" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":288,"completed":206,"skipped":3537,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:36:07.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 24 00:36:15.757: INFO: 0 pods remaining May 24 00:36:15.757: INFO: 0 pods has nil DeletionTimestamp May 24 00:36:15.757: INFO: May 24 00:36:17.213: INFO: 0 pods remaining May 24 00:36:17.213: INFO: 0 pods has nil DeletionTimestamp May 24 00:36:17.213: INFO: STEP: Gathering metrics W0524 00:36:17.926182 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 24 00:36:17.926: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:36:17.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3139" for this suite. • [SLOW TEST:11.078 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":288,"completed":207,"skipped":3542,"failed":0} SSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:36:18.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition May 24 00:36:19.197: INFO: Waiting up to 5m0s for pod "var-expansion-115d978b-f474-4813-af44-bdc11189ecb3" in namespace "var-expansion-669" to be "Succeeded or Failed" May 24 00:36:19.267: INFO: Pod "var-expansion-115d978b-f474-4813-af44-bdc11189ecb3": Phase="Pending", Reason="", readiness=false. Elapsed: 69.959955ms May 24 00:36:21.325: INFO: Pod "var-expansion-115d978b-f474-4813-af44-bdc11189ecb3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128035419s May 24 00:36:23.329: INFO: Pod "var-expansion-115d978b-f474-4813-af44-bdc11189ecb3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.132383234s STEP: Saw pod success May 24 00:36:23.330: INFO: Pod "var-expansion-115d978b-f474-4813-af44-bdc11189ecb3" satisfied condition "Succeeded or Failed" May 24 00:36:23.375: INFO: Trying to get logs from node latest-worker2 pod var-expansion-115d978b-f474-4813-af44-bdc11189ecb3 container dapi-container: STEP: delete the pod May 24 00:36:23.480: INFO: Waiting for pod var-expansion-115d978b-f474-4813-af44-bdc11189ecb3 to disappear May 24 00:36:23.495: INFO: Pod var-expansion-115d978b-f474-4813-af44-bdc11189ecb3 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:36:23.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-669" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":288,"completed":208,"skipped":3546,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:36:23.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 24 00:36:23.650: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7514 /api/v1/namespaces/watch-7514/configmaps/e2e-watch-test-label-changed 7d62ef7d-8f05-4fb3-841b-44e6757eb952 7162809 0 2020-05-24 00:36:23 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-24 00:36:23 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 24 00:36:23.651: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7514 /api/v1/namespaces/watch-7514/configmaps/e2e-watch-test-label-changed 7d62ef7d-8f05-4fb3-841b-44e6757eb952 7162811 0 2020-05-24 00:36:23 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-24 00:36:23 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 24 00:36:23.651: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7514 /api/v1/namespaces/watch-7514/configmaps/e2e-watch-test-label-changed 7d62ef7d-8f05-4fb3-841b-44e6757eb952 7162812 0 2020-05-24 00:36:23 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-24 00:36:23 +0000 UTC 
FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 24 00:36:33.689: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7514 /api/v1/namespaces/watch-7514/configmaps/e2e-watch-test-label-changed 7d62ef7d-8f05-4fb3-841b-44e6757eb952 7162896 0 2020-05-24 00:36:23 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-24 00:36:33 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 24 00:36:33.690: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7514 /api/v1/namespaces/watch-7514/configmaps/e2e-watch-test-label-changed 7d62ef7d-8f05-4fb3-841b-44e6757eb952 7162897 0 2020-05-24 00:36:23 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-24 00:36:33 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} May 24 00:36:33.690: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7514 /api/v1/namespaces/watch-7514/configmaps/e2e-watch-test-label-changed 7d62ef7d-8f05-4fb3-841b-44e6757eb952 7162898 0 2020-05-24 00:36:23 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-05-24 00:36:33 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:36:33.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7514" for this suite. 
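
The notifications above all hang off the label selector the watch was opened with: changing the label away from the selected value surfaces as DELETED, restoring it surfaces as ADDED carrying whatever mutations happened in between. A sketch of the watched object as the log shows it after the first mutation:

apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-label-changed
  namespace: watch-7514
  labels:
    watch-this-configmap: label-changed-and-restored   # the value the watch selects on
data:
  mutation: "1"

A watch filtered to watch-this-configmap=label-changed-and-restored therefore never sees the middle modification, exactly as the "Expecting not to observe a notification" step asserts.
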
• [SLOW TEST:10.195 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":288,"completed":209,"skipped":3568,"failed":0} SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:36:33.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 24 00:36:41.872: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 24 00:36:41.884: INFO: Pod pod-with-poststart-http-hook still exists May 24 00:36:43.884: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 24 00:36:43.943: INFO: Pod pod-with-poststart-http-hook still exists May 24 00:36:45.885: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 24 00:36:45.890: INFO: Pod pod-with-poststart-http-hook still exists May 24 00:36:47.885: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 24 00:36:47.890: INFO: Pod pod-with-poststart-http-hook still exists May 24 00:36:49.884: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 24 00:36:49.889: INFO: Pod pod-with-poststart-http-hook still exists May 24 00:36:51.884: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 24 00:36:51.890: INFO: Pod pod-with-poststart-http-hook still exists May 24 00:36:53.885: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 24 00:36:53.889: INFO: Pod pod-with-poststart-http-hook still exists May 24 00:36:55.885: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 24 00:36:55.889: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:36:55.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6877" for this suite. 
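
The poststart hook above lives in the pod spec's lifecycle block: the kubelet issues the httpGet right after the container starts, against the handler pod created in the BeforeEach. A minimal sketch, assuming the pause image and a hypothetical handler address and path:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook   # pod name as logged above
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: k8s.gcr.io/pause:3.2        # assumed image
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart    # hypothetical path
          port: 8080                   # hypothetical port
          host: 10.244.2.100           # hypothetical handler pod IP

If the hook fails, the kubelet kills the container and the restart policy takes over, so a passing check implies the handler actually received the request.
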
• [SLOW TEST:22.201 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":288,"completed":210,"skipped":3570,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:36:55.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0524 00:36:57.038528 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 24 00:36:57.038: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:36:57.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1513" for this suite. 
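
Whether the garbage collector removes the Deployment's ReplicaSet comes down to the propagation policy on the delete. "Not orphaning" means a policy under which dependents are collected once the owner is gone; a sketch of the options body such a DELETE could carry, assuming an explicit Background policy (Foreground would instead block the owner's deletion until dependents are gone, and Orphan would leave the ReplicaSet behind):

# DeleteOptions sent with the DELETE on the Deployment
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Background

The "expected 0 rs, got 1 rs" and "expected 0 pods, got 2 pods" lines are simply the test polling until the collector catches up.
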
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":288,"completed":211,"skipped":3580,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:36:57.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1523 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 24 00:36:57.204: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-9231' May 24 00:37:01.480: INFO: stderr: "" May 24 00:37:01.480: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1528 May 24 00:37:01.578: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-9231' May 24 00:37:05.320: INFO: stderr: "" May 24 00:37:05.320: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:37:05.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9231" for this suite. 
• [SLOW TEST:8.340 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1519 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":288,"completed":212,"skipped":3584,"failed":0} SSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:37:05.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 24 00:37:09.508: INFO: &Pod{ObjectMeta:{send-events-d96dc6a0-9ce3-46c1-b105-c90358411c05 events-9438 /api/v1/namespaces/events-9438/pods/send-events-d96dc6a0-9ce3-46c1-b105-c90358411c05 42f6501d-94c6-4d7c-be0c-91666be4c1d8 7163105 0 2020-05-24 00:37:05 +0000 UTC map[name:foo time:435786838] map[] [] [] [{e2e.test Update v1 2020-05-24 00:37:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-05-24 00:37:08 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.206\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5ng8h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5ng8h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5ng8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:37:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:37:08 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:37:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-24 00:37:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.2.206,StartTime:2020-05-24 00:37:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-24 00:37:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:6d5c9e684dd8f91cc36601933d51b91768d0606593de6820e19e5f194b0df1b9,ContainerID:containerd://f86fec2b32e1d51fe5e0caab8c31d098682db72b146d1698bcce62cdd3031b41,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.206,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 24 00:37:11.513: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 24 00:37:13.518: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:37:13.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-9438" for this suite. • [SLOW TEST:8.173 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":288,"completed":213,"skipped":3587,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:37:13.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:37:24.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7189" for this suite. • [SLOW TEST:11.227 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":288,"completed":214,"skipped":3620,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:37:24.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-c0c6bfc7-ebb6-4807-8cc3-a88dd19a2c8d STEP: Creating a pod to test consume secrets May 24 00:37:24.935: INFO: Waiting up to 5m0s for pod "pod-secrets-95ea325d-d39b-4105-9b3a-0589ba962b0a" in namespace "secrets-5026" to be "Succeeded or Failed" May 24 00:37:24.939: INFO: Pod "pod-secrets-95ea325d-d39b-4105-9b3a-0589ba962b0a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.880862ms May 24 00:37:26.943: INFO: Pod "pod-secrets-95ea325d-d39b-4105-9b3a-0589ba962b0a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008276599s May 24 00:37:28.948: INFO: Pod "pod-secrets-95ea325d-d39b-4105-9b3a-0589ba962b0a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012806654s STEP: Saw pod success May 24 00:37:28.948: INFO: Pod "pod-secrets-95ea325d-d39b-4105-9b3a-0589ba962b0a" satisfied condition "Succeeded or Failed" May 24 00:37:28.951: INFO: Trying to get logs from node latest-worker pod pod-secrets-95ea325d-d39b-4105-9b3a-0589ba962b0a container secret-env-test: STEP: delete the pod May 24 00:37:28.994: INFO: Waiting for pod pod-secrets-95ea325d-d39b-4105-9b3a-0589ba962b0a to disappear May 24 00:37:29.005: INFO: Pod pod-secrets-95ea325d-d39b-4105-9b3a-0589ba962b0a no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:37:29.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5026" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":288,"completed":215,"skipped":3633,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:37:29.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 24 00:37:29.134: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:37:35.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1807" for this suite. 
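
Listing CRD objects presupposes some definitions to list; the e2e creates its own randomized ones and deletes them again. A minimal sketch of such a definition, with an entirely hypothetical group and kind:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: noxus.mygroup.example.com    # must be <plural>.<group>
spec:
  group: mygroup.example.com         # hypothetical group
  scope: Cluster
  names:
    plural: noxus
    singular: noxu
    kind: Noxu
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
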
• [SLOW TEST:6.438 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":288,"completed":216,"skipped":3649,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:37:35.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 24 00:37:35.585: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c9bb527f-779e-41bc-977f-956c34209eac" in namespace "projected-2262" to be "Succeeded or Failed" May 24 00:37:35.686: INFO: Pod "downwardapi-volume-c9bb527f-779e-41bc-977f-956c34209eac": Phase="Pending", Reason="", readiness=false. Elapsed: 100.476204ms May 24 00:37:37.744: INFO: Pod "downwardapi-volume-c9bb527f-779e-41bc-977f-956c34209eac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.158898905s May 24 00:37:39.763: INFO: Pod "downwardapi-volume-c9bb527f-779e-41bc-977f-956c34209eac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.177567855s STEP: Saw pod success May 24 00:37:39.763: INFO: Pod "downwardapi-volume-c9bb527f-779e-41bc-977f-956c34209eac" satisfied condition "Succeeded or Failed" May 24 00:37:39.765: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-c9bb527f-779e-41bc-977f-956c34209eac container client-container: STEP: delete the pod May 24 00:37:39.798: INFO: Waiting for pod downwardapi-volume-c9bb527f-779e-41bc-977f-956c34209eac to disappear May 24 00:37:39.804: INFO: Pod downwardapi-volume-c9bb527f-779e-41bc-977f-956c34209eac no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:37:39.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2262" for this suite. 
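
DefaultMode here is the permission bits stamped on every file the projected volume writes, unless an individual item overrides them. A minimal sketch with hypothetical names and busybox standing in for the e2e image; the container name client-container comes from the log:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                     # assumed image
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400                # the mode under test
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name

The pod only has to print the file mode and exit, which is why the test just waits for Succeeded and reads the container log.
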
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":217,"skipped":3661,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:37:39.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 00:37:40.542: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 24 00:37:42.554: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725877460, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725877460, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725877460, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725877460, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 00:37:45.632: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook May 24 00:37:45.655: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:37:45.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8403" for this suite. STEP: Destroying namespace "webhook-8403-markers" for this suite. 
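
Denying CRD creation is done by registering a validating webhook for CREATE on customresourcedefinitions; the apiserver then refuses the create as soon as the webhook answers allowed: false. A sketch of the registration; the service name and namespace appear in the log above, while the webhook names and path are assumptions:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-crd                     # hypothetical name
webhooks:
- name: deny-crd.example.com         # hypothetical name
  rules:
  - apiGroups: ["apiextensions.k8s.io"]
    apiVersions: ["*"]
    operations: ["CREATE"]
    resources: ["customresourcedefinitions"]
  clientConfig:
    service:
      namespace: webhook-8403        # as logged above
      name: e2e-test-webhook         # as logged above
      path: /crd                     # hypothetical path
  sideEffects: None
  admissionReviewVersions: ["v1"]
  failurePolicy: Fail
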
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.960 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":288,"completed":218,"skipped":3662,"failed":0} [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:37:45.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 24 00:37:50.027: INFO: Waiting up to 5m0s for pod "client-envvars-60621309-b02d-49b3-83f9-429d970d0471" in namespace "pods-2412" to be "Succeeded or Failed" May 24 00:37:50.033: INFO: Pod "client-envvars-60621309-b02d-49b3-83f9-429d970d0471": Phase="Pending", Reason="", readiness=false. Elapsed: 6.271782ms May 24 00:37:52.037: INFO: Pod "client-envvars-60621309-b02d-49b3-83f9-429d970d0471": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010491135s May 24 00:37:54.043: INFO: Pod "client-envvars-60621309-b02d-49b3-83f9-429d970d0471": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016392065s STEP: Saw pod success May 24 00:37:54.043: INFO: Pod "client-envvars-60621309-b02d-49b3-83f9-429d970d0471" satisfied condition "Succeeded or Failed" May 24 00:37:54.050: INFO: Trying to get logs from node latest-worker pod client-envvars-60621309-b02d-49b3-83f9-429d970d0471 container env3cont: STEP: delete the pod May 24 00:37:54.085: INFO: Waiting for pod client-envvars-60621309-b02d-49b3-83f9-429d970d0471 to disappear May 24 00:37:54.104: INFO: Pod client-envvars-60621309-b02d-49b3-83f9-429d970d0471 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:37:54.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2412" for this suite. 
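
The environment variables checked above are the kubelet-injected service variables: for every service active in the pod's namespace when a container starts, variables derived from the service name are added to the environment (e.g. FOOSERVICE_SERVICE_HOST and FOOSERVICE_SERVICE_PORT for a service named fooservice, an example name). A minimal sketch of the client pod; env3cont is the container name from the log, the rest is assumed:

apiVersion: v1
kind: Pod
metadata:
  name: client-envvars-demo      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: env3cont
    image: busybox               # assumed image
    command: ["sh", "-c", "env"] # prints the injected *_SERVICE_HOST / *_SERVICE_PORT variables

Because injection happens at container start, the service has to exist before this pod runs, which is why the test creates the server pod and service first and only then launches the client.
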
• [SLOW TEST:8.339 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":288,"completed":219,"skipped":3662,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:37:54.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service endpoint-test2 in namespace services-5104 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5104 to expose endpoints map[] May 24 00:37:54.255: INFO: Get endpoints failed (15.632561ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 24 00:37:55.258: INFO: successfully validated that service endpoint-test2 in namespace services-5104 exposes endpoints map[] (1.019353626s elapsed) STEP: Creating pod pod1 in namespace services-5104 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5104 to expose endpoints map[pod1:[80]] May 24 00:37:59.644: INFO: successfully validated that service endpoint-test2 in namespace services-5104 exposes endpoints map[pod1:[80]] (4.348964869s elapsed) STEP: Creating pod pod2 in namespace services-5104 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5104 to expose endpoints map[pod1:[80] pod2:[80]] May 24 00:38:03.819: INFO: successfully validated that service endpoint-test2 in namespace services-5104 exposes endpoints map[pod1:[80] pod2:[80]] (4.157611087s elapsed) STEP: Deleting pod pod1 in namespace services-5104 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5104 to expose endpoints map[pod2:[80]] May 24 00:38:04.973: INFO: successfully validated that service endpoint-test2 in namespace services-5104 exposes endpoints map[pod2:[80]] (1.149641398s elapsed) STEP: Deleting pod pod2 in namespace services-5104 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5104 to expose endpoints map[] May 24 00:38:05.010: INFO: successfully validated that service endpoint-test2 in namespace services-5104 exposes endpoints map[] (30.594924ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:38:05.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5104" for this suite. 
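The endpoint maps asserted above (map[], map[pod1:[80]], map[pod1:[80] pod2:[80]], ...) are the service's Endpoints object tracking ready pods that match its selector. A sketch of the service under test, with the selector label assumed:

apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
  namespace: services-5104          # from the log above
spec:
  selector:
    name: endpoint-test2            # assumed label carried by pod1 and pod2
  ports:
  - port: 80
    protocol: TCP
# Creating pod1/pod2 with the matching label adds their IPs to the Endpoints
# object as <podIP>:80; deleting them removes the entries again, which is
# exactly the add/remove sequence validated above.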
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:11.060 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":288,"completed":220,"skipped":3705,"failed":0} [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:38:05.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 24 00:38:09.316: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:38:09.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7708" for this suite. 
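The asserted "DONE" comes from the FallbackToLogsOnError policy: when a container exits with an error and wrote nothing to its termination-message file, the kubelet copies the tail of the container log into the terminated status. A pod sketch with illustrative name, image, and command:

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo    # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox                  # illustrative image
    command: ["sh", "-c", "echo -n DONE; exit 1"]
    terminationMessagePolicy: FallbackToLogsOnError
    # terminationMessagePath keeps its default /dev/termination-log; since that
    # file stays empty and the exit code is non-zero, the kubelet falls back to
    # the log tail, so the terminated state's message becomes "DONE".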
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":221,"skipped":3705,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:38:09.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 24 00:38:09.489: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4180 /api/v1/namespaces/watch-4180/configmaps/e2e-watch-test-configmap-a 749fe447-7bf7-4d3d-8628-42678596a914 7163615 0 2020-05-24 00:38:09 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-24 00:38:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 24 00:38:09.489: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4180 /api/v1/namespaces/watch-4180/configmaps/e2e-watch-test-configmap-a 749fe447-7bf7-4d3d-8628-42678596a914 7163615 0 2020-05-24 00:38:09 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-24 00:38:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 24 00:38:19.498: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4180 /api/v1/namespaces/watch-4180/configmaps/e2e-watch-test-configmap-a 749fe447-7bf7-4d3d-8628-42678596a914 7163672 0 2020-05-24 00:38:09 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-24 00:38:19 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 24 00:38:19.499: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4180 /api/v1/namespaces/watch-4180/configmaps/e2e-watch-test-configmap-a 749fe447-7bf7-4d3d-8628-42678596a914 7163672 0 2020-05-24 00:38:09 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-24 00:38:19 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 
1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 24 00:38:29.508: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4180 /api/v1/namespaces/watch-4180/configmaps/e2e-watch-test-configmap-a 749fe447-7bf7-4d3d-8628-42678596a914 7163700 0 2020-05-24 00:38:09 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-24 00:38:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 24 00:38:29.509: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4180 /api/v1/namespaces/watch-4180/configmaps/e2e-watch-test-configmap-a 749fe447-7bf7-4d3d-8628-42678596a914 7163700 0 2020-05-24 00:38:09 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-24 00:38:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 24 00:38:39.518: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4180 /api/v1/namespaces/watch-4180/configmaps/e2e-watch-test-configmap-a 749fe447-7bf7-4d3d-8628-42678596a914 7163730 0 2020-05-24 00:38:09 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-24 00:38:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 24 00:38:39.518: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4180 /api/v1/namespaces/watch-4180/configmaps/e2e-watch-test-configmap-a 749fe447-7bf7-4d3d-8628-42678596a914 7163730 0 2020-05-24 00:38:09 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-05-24 00:38:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 24 00:38:49.535: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4180 /api/v1/namespaces/watch-4180/configmaps/e2e-watch-test-configmap-b 1bf45862-1086-4f54-85d3-b19497a6539b 7163761 0 2020-05-24 00:38:49 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-24 00:38:49 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 24 00:38:49.535: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4180 /api/v1/namespaces/watch-4180/configmaps/e2e-watch-test-configmap-b 1bf45862-1086-4f54-85d3-b19497a6539b 7163761 0 2020-05-24 00:38:49 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-24 00:38:49 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: 
deleting configmap B and ensuring the correct watchers observe the notification May 24 00:38:59.543: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4180 /api/v1/namespaces/watch-4180/configmaps/e2e-watch-test-configmap-b 1bf45862-1086-4f54-85d3-b19497a6539b 7163789 0 2020-05-24 00:38:49 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-24 00:38:49 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 24 00:38:59.543: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4180 /api/v1/namespaces/watch-4180/configmaps/e2e-watch-test-configmap-b 1bf45862-1086-4f54-85d3-b19497a6539b 7163789 0 2020-05-24 00:38:49 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-05-24 00:38:49 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:39:09.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4180" for this suite. • [SLOW TEST:60.153 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":288,"completed":222,"skipped":3728,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:39:09.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating all guestbook components May 24 00:39:09.601: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend May 24 00:39:09.601: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-793' May 24 00:39:10.108: INFO: stderr: "" May 24 00:39:10.108: INFO: stdout: "service/agnhost-slave created\n" May 24 00:39:10.108: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: 
app: agnhost role: master tier: backend May 24 00:39:10.108: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-793' May 24 00:39:10.622: INFO: stderr: "" May 24 00:39:10.622: INFO: stdout: "service/agnhost-master created\n" May 24 00:39:10.624: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 24 00:39:10.624: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-793' May 24 00:39:11.451: INFO: stderr: "" May 24 00:39:11.451: INFO: stdout: "service/frontend created\n" May 24 00:39:11.451: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 May 24 00:39:11.451: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-793' May 24 00:39:11.750: INFO: stderr: "" May 24 00:39:11.750: INFO: stdout: "deployment.apps/frontend created\n" May 24 00:39:11.750: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 24 00:39:11.750: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-793' May 24 00:39:12.066: INFO: stderr: "" May 24 00:39:12.066: INFO: stdout: "deployment.apps/agnhost-master created\n" May 24 00:39:12.066: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 24 00:39:12.066: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-793' May 24 00:39:12.402: INFO: stderr: "" May 24 00:39:12.402: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app May 24 00:39:12.402: INFO: Waiting for all frontend pods to be Running. May 24 00:39:22.453: INFO: Waiting for frontend to serve content. May 24 00:39:22.464: INFO: Trying to add a new entry to the guestbook. May 24 00:39:22.475: INFO: Verifying that added entry can be retrieved. 
STEP: using delete to clean up resources May 24 00:39:22.483: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-793' May 24 00:39:22.654: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 24 00:39:22.654: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources May 24 00:39:22.655: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-793' May 24 00:39:22.841: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 24 00:39:22.841: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 24 00:39:22.841: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-793' May 24 00:39:22.975: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 24 00:39:22.975: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 24 00:39:22.975: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-793' May 24 00:39:23.077: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 24 00:39:23.077: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 24 00:39:23.077: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-793' May 24 00:39:23.227: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 24 00:39:23.228: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 24 00:39:23.228: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-793' May 24 00:39:23.547: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 24 00:39:23.547: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:39:23.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-793" for this suite. 
• [SLOW TEST:14.442 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:342 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":288,"completed":223,"skipped":3755,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:39:23.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override command May 24 00:39:24.488: INFO: Waiting up to 5m0s for pod "client-containers-688715cb-faa6-4f1c-a989-4e4643f7eb50" in namespace "containers-416" to be "Succeeded or Failed" May 24 00:39:24.493: INFO: Pod "client-containers-688715cb-faa6-4f1c-a989-4e4643f7eb50": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039989ms May 24 00:39:26.572: INFO: Pod "client-containers-688715cb-faa6-4f1c-a989-4e4643f7eb50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0837562s May 24 00:39:28.576: INFO: Pod "client-containers-688715cb-faa6-4f1c-a989-4e4643f7eb50": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087686495s May 24 00:39:30.582: INFO: Pod "client-containers-688715cb-faa6-4f1c-a989-4e4643f7eb50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.093223335s STEP: Saw pod success May 24 00:39:30.582: INFO: Pod "client-containers-688715cb-faa6-4f1c-a989-4e4643f7eb50" satisfied condition "Succeeded or Failed" May 24 00:39:30.585: INFO: Trying to get logs from node latest-worker pod client-containers-688715cb-faa6-4f1c-a989-4e4643f7eb50 container test-container: STEP: delete the pod May 24 00:39:30.648: INFO: Waiting for pod client-containers-688715cb-faa6-4f1c-a989-4e4643f7eb50 to disappear May 24 00:39:30.661: INFO: Pod client-containers-688715cb-faa6-4f1c-a989-4e4643f7eb50 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:39:30.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-416" for this suite. 
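"Override the image's default command" maps to the pod spec's command field, which replaces the image ENTRYPOINT (args would replace CMD instead). A sketch; the pod name, binary path, and subcommand are assumptions, and the agnhost image is the one this suite uses elsewhere:

apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo      # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
    command: ["/agnhost", "entrypoint-tester"]   # assumed binary path and
                                                 # subcommand; `command`
                                                 # overrides ENTRYPOINT entirely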
• [SLOW TEST:6.673 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":288,"completed":224,"skipped":3765,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:39:30.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 May 24 00:39:30.803: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 24 00:39:30.865: INFO: Waiting for terminating namespaces to be deleted... May 24 00:39:30.868: INFO: Logging pods the apiserver thinks is on node latest-worker before test May 24 00:39:30.875: INFO: rally-c184502e-30nwopzm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:25 +0000 UTC (1 container statuses recorded) May 24 00:39:30.875: INFO: Container rally-c184502e-30nwopzm ready: true, restart count 0 May 24 00:39:30.875: INFO: rally-c184502e-30nwopzm-7fmqm from c-rally-c184502e-zuy338to started at 2020-05-11 08:48:29 +0000 UTC (1 container statuses recorded) May 24 00:39:30.875: INFO: Container rally-c184502e-30nwopzm ready: false, restart count 0 May 24 00:39:30.875: INFO: kindnet-hg2tf from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 24 00:39:30.875: INFO: Container kindnet-cni ready: true, restart count 0 May 24 00:39:30.875: INFO: kube-proxy-c8n27 from kube-system started at 2020-04-29 09:54:13 +0000 UTC (1 container statuses recorded) May 24 00:39:30.875: INFO: Container kube-proxy ready: true, restart count 0 May 24 00:39:30.875: INFO: agnhost-slave-6ccc7fb55-czf5d from kubectl-793 started at 2020-05-24 00:39:12 +0000 UTC (1 container statuses recorded) May 24 00:39:30.875: INFO: Container slave ready: false, restart count 0 May 24 00:39:30.875: INFO: frontend-6d7c5ddd5b-28h58 from kubectl-793 started at 2020-05-24 00:39:11 +0000 UTC (1 container statuses recorded) May 24 00:39:30.875: INFO: Container guestbook-frontend ready: false, restart count 0 May 24 00:39:30.875: INFO: frontend-6d7c5ddd5b-fdllj from kubectl-793 started at 2020-05-24 00:39:11 +0000 UTC (1 container statuses recorded) May 24 00:39:30.875: INFO: Container guestbook-frontend ready: false, restart count 0 May 24 00:39:30.875: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test May 24 00:39:30.880: INFO: rally-c184502e-ept97j69-6xvbj from c-rally-c184502e-2luhd3t4 started at 2020-05-11 08:48:03 +0000 UTC (1 container statuses 
recorded) May 24 00:39:30.880: INFO: Container rally-c184502e-ept97j69 ready: false, restart count 0 May 24 00:39:30.880: INFO: terminate-cmd-rpa297bb112-e54d-4fcd-9997-b59cbf421a58 from container-runtime-7090 started at 2020-05-12 09:11:35 +0000 UTC (1 container statuses recorded) May 24 00:39:30.880: INFO: Container terminate-cmd-rpa ready: true, restart count 2 May 24 00:39:30.880: INFO: kindnet-jl4dn from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 24 00:39:30.880: INFO: Container kindnet-cni ready: true, restart count 0 May 24 00:39:30.880: INFO: kube-proxy-pcmmp from kube-system started at 2020-04-29 09:54:11 +0000 UTC (1 container statuses recorded) May 24 00:39:30.880: INFO: Container kube-proxy ready: true, restart count 0 May 24 00:39:30.880: INFO: agnhost-master-747788dd-fjdkg from kubectl-793 started at 2020-05-24 00:39:12 +0000 UTC (1 container statuses recorded) May 24 00:39:30.880: INFO: Container master ready: false, restart count 0 May 24 00:39:30.880: INFO: agnhost-slave-6ccc7fb55-nx8fj from kubectl-793 started at 2020-05-24 00:39:12 +0000 UTC (1 container statuses recorded) May 24 00:39:30.880: INFO: Container slave ready: false, restart count 0 May 24 00:39:30.880: INFO: frontend-6d7c5ddd5b-l5mhh from kubectl-793 started at 2020-05-24 00:39:11 +0000 UTC (1 container statuses recorded) May 24 00:39:30.880: INFO: Container guestbook-frontend ready: false, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-b676adaf-8cc7-4f9c-b7ba-8aa291a6d391 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-b676adaf-8cc7-4f9c-b7ba-8aa291a6d391 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-b676adaf-8cc7-4f9c-b7ba-8aa291a6d391 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:39:47.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9901" for this suite. 
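The scheduling rule being validated: pods may request the same hostPort on one node as long as the full (hostIP, hostPort, protocol) triple differs. A sketch of the first pod, pinned to the labeled node the way the test does it; the pod name and containerPort are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: hostport-pod1               # illustrative
spec:
  nodeSelector:
    kubernetes.io/e2e-b676adaf-8cc7-4f9c-b7ba-8aa291a6d391: "90"   # the random label applied above
  containers:
  - name: agnhost
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13
    ports:
    - containerPort: 8080           # illustrative
      hostPort: 54321
      hostIP: 127.0.0.1
      protocol: TCP
# pod2 (hostIP: 127.0.0.2, TCP) and pod3 (hostIP: 127.0.0.2, UDP) reuse hostPort
# 54321 on the same node and still schedule; only an exact triple collision
# counts as a conflict.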
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.680 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":288,"completed":225,"skipped":3782,"failed":0} SSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:39:47.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-eaa0d758-f95d-45b3-8cb7-436001f655c3 STEP: Creating secret with name s-test-opt-upd-c310fea1-9dd0-4931-b507-c4b5962c476a STEP: Creating the pod STEP: Deleting secret s-test-opt-del-eaa0d758-f95d-45b3-8cb7-436001f655c3 STEP: Updating secret s-test-opt-upd-c310fea1-9dd0-4931-b507-c4b5962c476a STEP: Creating secret with name s-test-opt-create-4ba4ed63-a547-43c6-b69c-6c4669f2939d STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:41:22.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6189" for this suite. 
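The three secrets above exercise the `optional` flag on secret volumes: an optional reference lets the pod start (and keep running) while the secret is absent, and the kubelet syncs later creations, updates, and deletions into the mounted files. A volume sketch with hypothetical names:

apiVersion: v1
kind: Pod
metadata:
  name: secret-optional-demo        # hypothetical
spec:
  containers:
  - name: watcher
    image: busybox                  # illustrative
    command: ["sh", "-c", "while true; do cat /etc/secret-volume/* 2>/dev/null; sleep 2; done"]
    volumeMounts:
    - name: maybe-secret
      mountPath: /etc/secret-volume
  volumes:
  - name: maybe-secret
    secret:
      secretName: s-test-opt-create-example   # hypothetical; may not exist yet
      optional: true   # pod is admitted anyway; once the secret appears, the
                       # kubelet populates the volume, and later edits/deletes
                       # show up in the files, which is what the pod observes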
• [SLOW TEST:95.526 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":288,"completed":226,"skipped":3787,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:41:22.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 24 00:41:22.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR May 24 00:41:23.609: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-24T00:41:23Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-24T00:41:23Z]] name:name1 resourceVersion:7164546 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:883e5b5c-8fa7-4a80-b412-d55f16758685] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR May 24 00:41:33.615: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-24T00:41:33Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-24T00:41:33Z]] name:name2 resourceVersion:7164593 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:e7b84d15-1519-48f9-812f-182f76e7670f] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR May 24 00:41:43.626: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-24T00:41:23Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-24T00:41:43Z]] name:name1 resourceVersion:7164625 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:883e5b5c-8fa7-4a80-b412-d55f16758685] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR May 24 00:41:53.632: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test 
kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-24T00:41:33Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-24T00:41:53Z]] name:name2 resourceVersion:7164657 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:e7b84d15-1519-48f9-812f-182f76e7670f] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR May 24 00:42:03.655: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-24T00:41:23Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-24T00:41:43Z]] name:name1 resourceVersion:7164688 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:883e5b5c-8fa7-4a80-b412-d55f16758685] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR May 24 00:42:13.673: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-24T00:41:33Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-24T00:41:53Z]] name:name2 resourceVersion:7164718 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:e7b84d15-1519-48f9-812f-182f76e7670f] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:42:24.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-6144" for this suite. 
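The events above are an ordinary watch on a custom resource collection. Reconstructed from the selfLinks in the log, the CRD behind it looks roughly like this (v1beta1, as the apiVersion indicates; the singular and list names are assumptions):

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: noxus.mygroup.example.com
spec:
  group: mygroup.example.com
  version: v1beta1
  scope: Cluster                    # the selfLink has no namespace segment
  names:
    plural: noxus
    singular: noxu                  # assumed
    kind: WishIHadChosenNoxu
    listKind: WishIHadChosenNoxuList   # assumed
# A GET on /apis/mygroup.example.com/v1beta1/noxus?watch=true then yields the
# ADDED / MODIFIED / DELETED events for name1 and name2 recorded above.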
• [SLOW TEST:61.319 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":288,"completed":227,"skipped":3792,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:42:24.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 24 00:42:25.294: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 24 00:42:27.446: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725877745, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725877745, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725877745, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725877745, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-75dd644756\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 00:42:30.586: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook May 24 00:42:34.707: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config attach --namespace=webhook-1396 to-be-attached-pod -i 
-c=container1' May 24 00:42:34.836: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:42:34.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1396" for this suite. STEP: Destroying namespace "webhook-1396-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.814 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":288,"completed":228,"skipped":3823,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:42:35.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC May 24 00:42:35.052: INFO: namespace kubectl-8294 May 24 00:42:35.052: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8294' May 24 00:42:35.622: INFO: stderr: "" May 24 00:42:35.622: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 24 00:42:36.627: INFO: Selector matched 1 pods for map[app:agnhost] May 24 00:42:36.627: INFO: Found 0 / 1 May 24 00:42:37.626: INFO: Selector matched 1 pods for map[app:agnhost] May 24 00:42:37.626: INFO: Found 0 / 1 May 24 00:42:38.646: INFO: Selector matched 1 pods for map[app:agnhost] May 24 00:42:38.646: INFO: Found 0 / 1 May 24 00:42:39.627: INFO: Selector matched 1 pods for map[app:agnhost] May 24 00:42:39.627: INFO: Found 1 / 1 May 24 00:42:39.627: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 24 00:42:39.629: INFO: Selector matched 1 pods for map[app:agnhost] May 24 00:42:39.629: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 24 00:42:39.629: INFO: wait on agnhost-master startup in kubectl-8294 May 24 00:42:39.629: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs agnhost-master-q56cd agnhost-master --namespace=kubectl-8294' May 24 00:42:39.747: INFO: stderr: "" May 24 00:42:39.747: INFO: stdout: "Paused\n" STEP: exposing RC May 24 00:42:39.747: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8294' May 24 00:42:39.947: INFO: stderr: "" May 24 00:42:39.947: INFO: stdout: "service/rm2 exposed\n" May 24 00:42:40.010: INFO: Service rm2 in namespace kubectl-8294 found. STEP: exposing service May 24 00:42:42.017: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8294' May 24 00:42:42.158: INFO: stderr: "" May 24 00:42:42.158: INFO: stdout: "service/rm3 exposed\n" May 24 00:42:42.165: INFO: Service rm3 in namespace kubectl-8294 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:42:44.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8294" for this suite. • [SLOW TEST:9.169 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1224 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":288,"completed":229,"skipped":3854,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:42:44.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-projected-all-test-volume-1f9ee433-90d2-4b23-b621-0ad5c646d1b6 STEP: Creating secret with name secret-projected-all-test-volume-ecd780dd-2a5c-42b3-b609-26cc01707c0c STEP: Creating a pod to test Check all projections for projected volume plugin May 24 00:42:44.329: INFO: Waiting up to 5m0s for pod "projected-volume-3a2c053c-26e1-4f16-b57e-505ff0227c4a" in namespace "projected-2009" to be "Succeeded or Failed" May 24 00:42:44.333: INFO: Pod "projected-volume-3a2c053c-26e1-4f16-b57e-505ff0227c4a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.764437ms May 24 00:42:46.337: INFO: Pod "projected-volume-3a2c053c-26e1-4f16-b57e-505ff0227c4a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007209686s May 24 00:42:48.341: INFO: Pod "projected-volume-3a2c053c-26e1-4f16-b57e-505ff0227c4a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01164154s STEP: Saw pod success May 24 00:42:48.341: INFO: Pod "projected-volume-3a2c053c-26e1-4f16-b57e-505ff0227c4a" satisfied condition "Succeeded or Failed" May 24 00:42:48.343: INFO: Trying to get logs from node latest-worker pod projected-volume-3a2c053c-26e1-4f16-b57e-505ff0227c4a container projected-all-volume-test: STEP: delete the pod May 24 00:42:48.457: INFO: Waiting for pod projected-volume-3a2c053c-26e1-4f16-b57e-505ff0227c4a to disappear May 24 00:42:48.471: INFO: Pod projected-volume-3a2c053c-26e1-4f16-b57e-505ff0227c4a no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:42:48.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2009" for this suite. •{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":288,"completed":230,"skipped":3871,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:42:48.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs May 24 00:42:48.539: INFO: Waiting up to 5m0s for pod "pod-18502a6f-3cd2-4f55-b589-29a4096ee0a2" in namespace "emptydir-33" to be "Succeeded or Failed" May 24 00:42:48.543: INFO: Pod "pod-18502a6f-3cd2-4f55-b589-29a4096ee0a2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01692ms May 24 00:42:50.694: INFO: Pod "pod-18502a6f-3cd2-4f55-b589-29a4096ee0a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.154986541s May 24 00:42:52.697: INFO: Pod "pod-18502a6f-3cd2-4f55-b589-29a4096ee0a2": Phase="Running", Reason="", readiness=true. Elapsed: 4.158482843s May 24 00:42:54.701: INFO: Pod "pod-18502a6f-3cd2-4f55-b589-29a4096ee0a2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.162581053s STEP: Saw pod success May 24 00:42:54.702: INFO: Pod "pod-18502a6f-3cd2-4f55-b589-29a4096ee0a2" satisfied condition "Succeeded or Failed" May 24 00:42:54.704: INFO: Trying to get logs from node latest-worker pod pod-18502a6f-3cd2-4f55-b589-29a4096ee0a2 container test-container: STEP: delete the pod May 24 00:42:54.784: INFO: Waiting for pod pod-18502a6f-3cd2-4f55-b589-29a4096ee0a2 to disappear May 24 00:42:54.795: INFO: Pod pod-18502a6f-3cd2-4f55-b589-29a4096ee0a2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:42:54.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-33" for this suite. • [SLOW TEST:6.323 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":231,"skipped":3874,"failed":0} SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:42:54.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-5389 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet May 24 00:42:55.000: INFO: Found 0 stateful pods, waiting for 3 May 24 00:43:05.004: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 24 00:43:05.004: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 24 00:43:05.004: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 24 00:43:15.007: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 24 00:43:15.007: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 24 00:43:15.007: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 24 00:43:15.030: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not 
applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 24 00:43:25.134: INFO: Updating stateful set ss2 May 24 00:43:25.159: INFO: Waiting for Pod statefulset-5389/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted May 24 00:43:35.731: INFO: Found 2 stateful pods, waiting for 3 May 24 00:43:45.737: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 24 00:43:45.737: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 24 00:43:45.737: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 24 00:43:45.763: INFO: Updating stateful set ss2 May 24 00:43:45.772: INFO: Waiting for Pod statefulset-5389/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 24 00:43:55.799: INFO: Updating stateful set ss2 May 24 00:43:55.848: INFO: Waiting for StatefulSet statefulset-5389/ss2 to complete update May 24 00:43:55.848: INFO: Waiting for Pod statefulset-5389/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 24 00:44:05.857: INFO: Deleting all statefulset in ns statefulset-5389 May 24 00:44:05.860: INFO: Scaling statefulset ss2 to 0 May 24 00:44:35.875: INFO: Waiting for statefulset status.replicas updated to 0 May 24 00:44:35.878: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:44:35.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5389" for this suite. 
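
For reference, the canary and phased steps above are driven by the StatefulSet's updateStrategy partition: when the template changes, only pods with an ordinal at or above the partition move to the new revision. A minimal sketch of the relevant spec, reusing the names and image seen in this run; the labels and container name are assumptions, not taken from the suite:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2                                        # StatefulSet name from this run
spec:
  serviceName: test                                # the governing Service created above
  replicas: 3
  selector:
    matchLabels:
      app: ss2                                     # assumed label
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2                                 # canary: only ordinal >= 2 (ss2-2) updates
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: webserver                            # assumed container name
        image: docker.io/library/httpd:2.4.39-alpine   # the updated image from this run

Lowering the partition in stages (2, then 1, then 0) produces the phased rollout recorded above: ss2-2, then ss2-1, then ss2-0 move to the new revision, one ordinal at a time.
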
• [SLOW TEST:101.099 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":288,"completed":232,"skipped":3881,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:44:35.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-1044 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-1044 I0524 00:44:36.070433 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-1044, replica count: 2 I0524 00:44:39.120934 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0524 00:44:42.121292 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 24 00:44:42.121: INFO: Creating new exec pod May 24 00:44:47.152: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1044 execpodm9vkq -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 24 00:44:47.538: INFO: stderr: "I0524 00:44:47.291815 3150 log.go:172] (0xc00003a210) (0xc000722000) Create stream\nI0524 00:44:47.292065 3150 log.go:172] (0xc00003a210) (0xc000722000) Stream added, broadcasting: 1\nI0524 00:44:47.295345 3150 log.go:172] (0xc00003a210) Reply frame received for 1\nI0524 00:44:47.295378 3150 log.go:172] (0xc00003a210) (0xc000722fa0) Create stream\nI0524 00:44:47.295389 3150 log.go:172] (0xc00003a210) (0xc000722fa0) Stream added, broadcasting: 3\nI0524 00:44:47.296213 3150 log.go:172] (0xc00003a210) Reply frame received for 3\nI0524 00:44:47.296247 3150 log.go:172] (0xc00003a210) (0xc0006f2640) Create stream\nI0524 00:44:47.296262 3150 log.go:172] (0xc00003a210) (0xc0006f2640) Stream added, broadcasting: 5\nI0524 00:44:47.297396 3150 log.go:172] (0xc00003a210) Reply frame received for 
5\nI0524 00:44:47.531477 3150 log.go:172] (0xc00003a210) Data frame received for 5\nI0524 00:44:47.531505 3150 log.go:172] (0xc0006f2640) (5) Data frame handling\nI0524 00:44:47.531524 3150 log.go:172] (0xc0006f2640) (5) Data frame sent\nI0524 00:44:47.531533 3150 log.go:172] (0xc00003a210) Data frame received for 5\nI0524 00:44:47.531540 3150 log.go:172] (0xc0006f2640) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0524 00:44:47.531560 3150 log.go:172] (0xc0006f2640) (5) Data frame sent\nI0524 00:44:47.531726 3150 log.go:172] (0xc00003a210) Data frame received for 5\nI0524 00:44:47.531743 3150 log.go:172] (0xc0006f2640) (5) Data frame handling\nI0524 00:44:47.531768 3150 log.go:172] (0xc00003a210) Data frame received for 3\nI0524 00:44:47.531787 3150 log.go:172] (0xc000722fa0) (3) Data frame handling\nI0524 00:44:47.533376 3150 log.go:172] (0xc00003a210) Data frame received for 1\nI0524 00:44:47.533392 3150 log.go:172] (0xc000722000) (1) Data frame handling\nI0524 00:44:47.533403 3150 log.go:172] (0xc000722000) (1) Data frame sent\nI0524 00:44:47.533466 3150 log.go:172] (0xc00003a210) (0xc000722000) Stream removed, broadcasting: 1\nI0524 00:44:47.533703 3150 log.go:172] (0xc00003a210) Go away received\nI0524 00:44:47.533722 3150 log.go:172] (0xc00003a210) (0xc000722000) Stream removed, broadcasting: 1\nI0524 00:44:47.533739 3150 log.go:172] (0xc00003a210) (0xc000722fa0) Stream removed, broadcasting: 3\nI0524 00:44:47.533752 3150 log.go:172] (0xc00003a210) (0xc0006f2640) Stream removed, broadcasting: 5\n" May 24 00:44:47.538: INFO: stdout: "" May 24 00:44:47.539: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-1044 execpodm9vkq -- /bin/sh -x -c nc -zv -t -w 2 10.96.240.234 80' May 24 00:44:47.723: INFO: stderr: "I0524 00:44:47.653259 3172 log.go:172] (0xc000ac7760) (0xc000825540) Create stream\nI0524 00:44:47.653310 3172 log.go:172] (0xc000ac7760) (0xc000825540) Stream added, broadcasting: 1\nI0524 00:44:47.658266 3172 log.go:172] (0xc000ac7760) Reply frame received for 1\nI0524 00:44:47.658311 3172 log.go:172] (0xc000ac7760) (0xc0006b0280) Create stream\nI0524 00:44:47.658324 3172 log.go:172] (0xc000ac7760) (0xc0006b0280) Stream added, broadcasting: 3\nI0524 00:44:47.659099 3172 log.go:172] (0xc000ac7760) Reply frame received for 3\nI0524 00:44:47.659139 3172 log.go:172] (0xc000ac7760) (0xc0006b0820) Create stream\nI0524 00:44:47.659153 3172 log.go:172] (0xc000ac7760) (0xc0006b0820) Stream added, broadcasting: 5\nI0524 00:44:47.659864 3172 log.go:172] (0xc000ac7760) Reply frame received for 5\nI0524 00:44:47.718219 3172 log.go:172] (0xc000ac7760) Data frame received for 5\nI0524 00:44:47.718244 3172 log.go:172] (0xc0006b0820) (5) Data frame handling\nI0524 00:44:47.718254 3172 log.go:172] (0xc0006b0820) (5) Data frame sent\nI0524 00:44:47.718259 3172 log.go:172] (0xc000ac7760) Data frame received for 5\nI0524 00:44:47.718263 3172 log.go:172] (0xc0006b0820) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.240.234 80\nConnection to 10.96.240.234 80 port [tcp/http] succeeded!\nI0524 00:44:47.718284 3172 log.go:172] (0xc000ac7760) Data frame received for 3\nI0524 00:44:47.718290 3172 log.go:172] (0xc0006b0280) (3) Data frame handling\nI0524 00:44:47.719615 3172 log.go:172] (0xc000ac7760) Data frame received for 1\nI0524 00:44:47.719648 3172 log.go:172] (0xc000825540) (1) Data frame handling\nI0524 00:44:47.719660 3172 
log.go:172] (0xc000825540) (1) Data frame sent\nI0524 00:44:47.719670 3172 log.go:172] (0xc000ac7760) (0xc000825540) Stream removed, broadcasting: 1\nI0524 00:44:47.719691 3172 log.go:172] (0xc000ac7760) Go away received\nI0524 00:44:47.720092 3172 log.go:172] (0xc000ac7760) (0xc000825540) Stream removed, broadcasting: 1\nI0524 00:44:47.720110 3172 log.go:172] (0xc000ac7760) (0xc0006b0280) Stream removed, broadcasting: 3\nI0524 00:44:47.720118 3172 log.go:172] (0xc000ac7760) (0xc0006b0820) Stream removed, broadcasting: 5\n" May 24 00:44:47.723: INFO: stdout: "" May 24 00:44:47.723: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:44:47.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1044" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:11.934 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":288,"completed":233,"skipped":3889,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:44:47.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 24 00:44:48.333: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 24 00:44:50.344: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725877888, loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725877888, loc:(*time.Location)(0x7c342a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725877888, 
loc:(*time.Location)(0x7c342a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725877888, loc:(*time.Location)(0x7c342a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-69bd8c6bb8\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 24 00:44:53.387: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 24 00:44:53.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:44:54.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-9874" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:6.924 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":288,"completed":234,"skipped":3900,"failed":0} SSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:44:54.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange May 24 00:44:54.927: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values May 24 00:44:54.947: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 24 00:44:54.947: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange May 24 00:44:55.222: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 24 00:44:55.222: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange May 24 00:44:55.383: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] May 24 00:44:55.383: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted May 24 00:45:02.727: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:45:02.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-3901" for this suite. 
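
The defaults verified above decode to cpu 100m / memory 200Mi / ephemeral-storage 200Gi for requests and cpu 500m / memory 500Mi / ephemeral-storage 500Gi for limits (209715200 bytes = 200Mi, 214748364800 bytes = 200Gi). A minimal LimitRange that yields exactly that behavior; the object name is illustrative, and the min/max bounds the test also exercises are omitted because their values do not appear in the log:

apiVersion: v1
kind: LimitRange
metadata:
  name: limits                   # illustrative name
spec:
  limits:
  - type: Container
    defaultRequest:              # injected as resources.requests when a pod omits them
      cpu: 100m
      memory: 200Mi
      ephemeral-storage: 200Gi
    default:                     # injected as resources.limits when a pod omits them
      cpu: 500m
      memory: 500Mi
      ephemeral-storage: 500Gi

A pod that sets only some fields keeps what it sets and inherits the rest, which is the "merged resource requirements" case above: the 300m cpu came from the pod, while the 500Mi and 500Gi limits were filled in from the LimitRange.
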
• [SLOW TEST:8.007 seconds] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":288,"completed":235,"skipped":3904,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:45:02.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 24 00:45:02.908: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-b8aecf16-1fa4-4f9a-bec5-938cd9166c2f" in namespace "security-context-test-259" to be "Succeeded or Failed" May 24 00:45:02.958: INFO: Pod "alpine-nnp-false-b8aecf16-1fa4-4f9a-bec5-938cd9166c2f": Phase="Pending", Reason="", readiness=false. Elapsed: 50.141092ms May 24 00:45:04.962: INFO: Pod "alpine-nnp-false-b8aecf16-1fa4-4f9a-bec5-938cd9166c2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053312359s May 24 00:45:06.964: INFO: Pod "alpine-nnp-false-b8aecf16-1fa4-4f9a-bec5-938cd9166c2f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055793299s May 24 00:45:08.988: INFO: Pod "alpine-nnp-false-b8aecf16-1fa4-4f9a-bec5-938cd9166c2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.079370635s May 24 00:45:08.988: INFO: Pod "alpine-nnp-false-b8aecf16-1fa4-4f9a-bec5-938cd9166c2f" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:45:09.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-259" for this suite. 
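
What the pod above asserts is the effect of securityContext.allowPrivilegeEscalation: when false, the container's processes run with no_new_privs set, so a setuid binary cannot gain more privileges than its parent. A minimal sketch; the image, command, and container name are assumptions, not taken from the suite:

apiVersion: v1
kind: Pod
metadata:
  name: alpine-nnp-false         # name prefix from this run
spec:
  containers:
  - name: main                   # assumed container name
    image: alpine:3.12           # assumed image
    command: ["sh", "-c", "sleep 5"]   # placeholder workload
    securityContext:
      allowPrivilegeEscalation: false
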
• [SLOW TEST:6.520 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when creating containers with AllowPrivilegeEscalation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":236,"skipped":3912,"failed":0} S ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:45:09.287: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod test-webserver-25af9d6a-4368-4e0d-9f64-82aba214407a in namespace container-probe-9424 May 24 00:45:14.088: INFO: Started pod test-webserver-25af9d6a-4368-4e0d-9f64-82aba214407a in namespace container-probe-9424 STEP: checking the pod's current state and verifying that restartCount is present May 24 00:45:14.091: INFO: Initial restart count of pod test-webserver-25af9d6a-4368-4e0d-9f64-82aba214407a is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:49:14.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9424" for this suite. 
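
The four minutes of waiting above are the point of the test: a webserver whose /healthz keeps returning 200 must never be restarted, i.e. restartCount stays 0 for the whole observation window. A sketch of such a pod; the image and probe timings are assumptions (the log only shows the pod name prefix):

apiVersion: v1
kind: Pod
metadata:
  name: test-webserver           # name prefix from this run
spec:
  containers:
  - name: test-webserver
    image: example.io/test-webserver:1.0   # hypothetical image that serves 200 on /healthz
    livenessProbe:
      httpGet:
        path: /healthz
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 10
      failureThreshold: 3        # with a healthy endpoint this never trips
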
• [SLOW TEST:245.628 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":288,"completed":237,"skipped":3913,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:49:14.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium May 24 00:49:15.027: INFO: Waiting up to 5m0s for pod "pod-f62b21ee-a9e6-4f3a-a46f-568fb12cde54" in namespace "emptydir-1690" to be "Succeeded or Failed" May 24 00:49:15.041: INFO: Pod "pod-f62b21ee-a9e6-4f3a-a46f-568fb12cde54": Phase="Pending", Reason="", readiness=false. Elapsed: 13.550203ms May 24 00:49:17.103: INFO: Pod "pod-f62b21ee-a9e6-4f3a-a46f-568fb12cde54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075844599s May 24 00:49:19.157: INFO: Pod "pod-f62b21ee-a9e6-4f3a-a46f-568fb12cde54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.129897587s STEP: Saw pod success May 24 00:49:19.157: INFO: Pod "pod-f62b21ee-a9e6-4f3a-a46f-568fb12cde54" satisfied condition "Succeeded or Failed" May 24 00:49:19.160: INFO: Trying to get logs from node latest-worker pod pod-f62b21ee-a9e6-4f3a-a46f-568fb12cde54 container test-container: STEP: delete the pod May 24 00:49:19.211: INFO: Waiting for pod pod-f62b21ee-a9e6-4f3a-a46f-568fb12cde54 to disappear May 24 00:49:19.220: INFO: Pod pod-f62b21ee-a9e6-4f3a-a46f-568fb12cde54 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:49:19.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1690" for this suite. 
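
The emptydir variants in this run differ only in user, file mode, and medium: (root,0777,tmpfs) earlier used medium: Memory, while (non-root,0777,default) above uses node-local storage; the volume directory is created world-writable, which is what lets the non-root user write. A sketch of the default-medium case; the UID, image, and command are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-demo        # illustrative name
spec:
  securityContext:
    runAsUser: 1001              # the "non-root" part; the actual UID is not shown in the log
  containers:
  - name: test-container         # container name from this run
    image: busybox:1.31          # assumed image
    command: ["sh", "-c", "ls -ld /mnt/volume && touch /mnt/volume/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir: {}                 # default medium; medium: Memory would give tmpfs
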
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":238,"skipped":3925,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:49:19.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 24 00:49:19.336: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1f68b8b7-7b9f-465d-9df7-7313d06e3492" in namespace "downward-api-7440" to be "Succeeded or Failed" May 24 00:49:19.340: INFO: Pod "downwardapi-volume-1f68b8b7-7b9f-465d-9df7-7313d06e3492": Phase="Pending", Reason="", readiness=false. Elapsed: 4.381719ms May 24 00:49:21.345: INFO: Pod "downwardapi-volume-1f68b8b7-7b9f-465d-9df7-7313d06e3492": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008913496s May 24 00:49:23.349: INFO: Pod "downwardapi-volume-1f68b8b7-7b9f-465d-9df7-7313d06e3492": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013474842s STEP: Saw pod success May 24 00:49:23.349: INFO: Pod "downwardapi-volume-1f68b8b7-7b9f-465d-9df7-7313d06e3492" satisfied condition "Succeeded or Failed" May 24 00:49:23.352: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-1f68b8b7-7b9f-465d-9df7-7313d06e3492 container client-container: STEP: delete the pod May 24 00:49:23.402: INFO: Waiting for pod downwardapi-volume-1f68b8b7-7b9f-465d-9df7-7313d06e3492 to disappear May 24 00:49:23.413: INFO: Pod downwardapi-volume-1f68b8b7-7b9f-465d-9df7-7313d06e3492 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:49:23.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7440" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":239,"skipped":3930,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:49:23.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod May 24 00:49:28.048: INFO: Successfully updated pod "annotationupdatecbaa12b5-8efb-47b9-b13f-4b2a3aba54fb" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:49:32.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6241" for this suite. • [SLOW TEST:8.675 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":288,"completed":240,"skipped":3956,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:49:32.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:49:43.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9429" for this suite. • [SLOW TEST:11.136 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":288,"completed":241,"skipped":3960,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:49:43.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:49:49.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6594" for this suite. STEP: Destroying namespace "nsdeletetest-9602" for this suite. May 24 00:49:49.599: INFO: Namespace nsdeletetest-9602 was already deleted STEP: Destroying namespace "nsdeletetest-2494" for this suite. 
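
The sequence above (create a namespace, create a Service in it, delete the namespace, recreate it, verify it is empty) demonstrates namespace-scoped garbage collection: deleting a Namespace deletes every object inside it, and a recreated namespace of the same name starts empty. A minimal sketch of the two objects involved; the Service name and selector are illustrative:

apiVersion: v1
kind: Namespace
metadata:
  name: nsdeletetest             # name prefix from this run
---
apiVersion: v1
kind: Service
metadata:
  name: test-service             # illustrative name
  namespace: nsdeletetest
spec:
  selector:
    app: demo                    # illustrative selector
  ports:
  - port: 80

Deleting the Namespace removes the Service with it; no per-object cleanup is required.
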
• [SLOW TEST:6.370 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":288,"completed":242,"skipped":3968,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:49:49.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-f5831878-b552-4af9-8858-e559e83ddc58 in namespace container-probe-5867 May 24 00:49:53.740: INFO: Started pod busybox-f5831878-b552-4af9-8858-e559e83ddc58 in namespace container-probe-5867 STEP: checking the pod's current state and verifying that restartCount is present May 24 00:49:53.743: INFO: Initial restart count of pod busybox-f5831878-b552-4af9-8858-e559e83ddc58 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:53:54.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5867" for this suite. 
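
This is the exec-probe twin of the /healthz test above: as long as cat /tmp/health keeps exiting 0, the kubelet must leave the container alone and restartCount stays 0 for the full four-minute window. A sketch; the image, command, and timings are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness         # name prefix from this run
spec:
  containers:
  - name: busybox
    image: busybox:1.31          # assumed image
    command: ["sh", "-c", "touch /tmp/health && sleep 600"]   # keeps the probe file in place
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # the probe named in the test
      initialDelaySeconds: 5
      periodSeconds: 5
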
• [SLOW TEST:245.001 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":288,"completed":243,"skipped":3978,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:53:54.606: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-6790 STEP: creating service affinity-clusterip-transition in namespace services-6790 STEP: creating replication controller affinity-clusterip-transition in namespace services-6790 I0524 00:53:54.749096 7 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-6790, replica count: 3 I0524 00:53:57.799689 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0524 00:54:00.799930 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0524 00:54:03.800118 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 24 00:54:03.806: INFO: Creating new exec pod May 24 00:54:08.854: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6790 execpod-affinity5zkwx -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' May 24 00:54:11.931: INFO: stderr: "I0524 00:54:11.845989 3191 log.go:172] (0xc00092a630) (0xc000847d60) Create stream\nI0524 00:54:11.846027 3191 log.go:172] (0xc00092a630) (0xc000847d60) Stream added, broadcasting: 1\nI0524 00:54:11.847721 3191 log.go:172] (0xc00092a630) Reply frame received for 1\nI0524 00:54:11.847767 3191 log.go:172] (0xc00092a630) (0xc0006eeb40) Create stream\nI0524 00:54:11.847777 3191 log.go:172] (0xc00092a630) (0xc0006eeb40) Stream added, broadcasting: 3\nI0524 00:54:11.848521 3191 log.go:172] (0xc00092a630) Reply frame received for 3\nI0524 00:54:11.848548 3191 log.go:172] (0xc00092a630) (0xc00083d360) Create stream\nI0524 00:54:11.848555 3191 log.go:172] (0xc00092a630) (0xc00083d360) Stream added, broadcasting: 5\nI0524 
00:54:11.849769 3191 log.go:172] (0xc00092a630) Reply frame received for 5\nI0524 00:54:11.923787 3191 log.go:172] (0xc00092a630) Data frame received for 3\nI0524 00:54:11.923827 3191 log.go:172] (0xc0006eeb40) (3) Data frame handling\nI0524 00:54:11.923855 3191 log.go:172] (0xc00092a630) Data frame received for 5\nI0524 00:54:11.923867 3191 log.go:172] (0xc00083d360) (5) Data frame handling\nI0524 00:54:11.923884 3191 log.go:172] (0xc00083d360) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI0524 00:54:11.924036 3191 log.go:172] (0xc00092a630) Data frame received for 5\nI0524 00:54:11.924076 3191 log.go:172] (0xc00083d360) (5) Data frame handling\nI0524 00:54:11.925627 3191 log.go:172] (0xc00092a630) Data frame received for 1\nI0524 00:54:11.925647 3191 log.go:172] (0xc000847d60) (1) Data frame handling\nI0524 00:54:11.925661 3191 log.go:172] (0xc000847d60) (1) Data frame sent\nI0524 00:54:11.925674 3191 log.go:172] (0xc00092a630) (0xc000847d60) Stream removed, broadcasting: 1\nI0524 00:54:11.925697 3191 log.go:172] (0xc00092a630) Go away received\nI0524 00:54:11.926034 3191 log.go:172] (0xc00092a630) (0xc000847d60) Stream removed, broadcasting: 1\nI0524 00:54:11.926064 3191 log.go:172] (0xc00092a630) (0xc0006eeb40) Stream removed, broadcasting: 3\nI0524 00:54:11.926076 3191 log.go:172] (0xc00092a630) (0xc00083d360) Stream removed, broadcasting: 5\n" May 24 00:54:11.931: INFO: stdout: "" May 24 00:54:11.932: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6790 execpod-affinity5zkwx -- /bin/sh -x -c nc -zv -t -w 2 10.96.80.42 80' May 24 00:54:12.159: INFO: stderr: "I0524 00:54:12.077569 3224 log.go:172] (0xc000b3f080) (0xc0004a6e60) Create stream\nI0524 00:54:12.077643 3224 log.go:172] (0xc000b3f080) (0xc0004a6e60) Stream added, broadcasting: 1\nI0524 00:54:12.080009 3224 log.go:172] (0xc000b3f080) Reply frame received for 1\nI0524 00:54:12.080058 3224 log.go:172] (0xc000b3f080) (0xc0004a7400) Create stream\nI0524 00:54:12.080075 3224 log.go:172] (0xc000b3f080) (0xc0004a7400) Stream added, broadcasting: 3\nI0524 00:54:12.081098 3224 log.go:172] (0xc000b3f080) Reply frame received for 3\nI0524 00:54:12.081275 3224 log.go:172] (0xc000b3f080) (0xc0004a7a40) Create stream\nI0524 00:54:12.081292 3224 log.go:172] (0xc000b3f080) (0xc0004a7a40) Stream added, broadcasting: 5\nI0524 00:54:12.082167 3224 log.go:172] (0xc000b3f080) Reply frame received for 5\nI0524 00:54:12.151186 3224 log.go:172] (0xc000b3f080) Data frame received for 5\nI0524 00:54:12.151211 3224 log.go:172] (0xc0004a7a40) (5) Data frame handling\nI0524 00:54:12.151226 3224 log.go:172] (0xc0004a7a40) (5) Data frame sent\nI0524 00:54:12.151232 3224 log.go:172] (0xc000b3f080) Data frame received for 5\nI0524 00:54:12.151238 3224 log.go:172] (0xc0004a7a40) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.80.42 80\nConnection to 10.96.80.42 80 port [tcp/http] succeeded!\nI0524 00:54:12.151276 3224 log.go:172] (0xc000b3f080) Data frame received for 3\nI0524 00:54:12.151319 3224 log.go:172] (0xc0004a7400) (3) Data frame handling\nI0524 00:54:12.153002 3224 log.go:172] (0xc000b3f080) Data frame received for 1\nI0524 00:54:12.153031 3224 log.go:172] (0xc0004a6e60) (1) Data frame handling\nI0524 00:54:12.153049 3224 log.go:172] (0xc0004a6e60) (1) Data frame sent\nI0524 00:54:12.153093 3224 log.go:172] (0xc000b3f080) (0xc0004a6e60) Stream removed, broadcasting: 
1\nI0524 00:54:12.153322 3224 log.go:172] (0xc000b3f080) Go away received\nI0524 00:54:12.153626 3224 log.go:172] (0xc000b3f080) (0xc0004a6e60) Stream removed, broadcasting: 1\nI0524 00:54:12.153643 3224 log.go:172] (0xc000b3f080) (0xc0004a7400) Stream removed, broadcasting: 3\nI0524 00:54:12.153651 3224 log.go:172] (0xc000b3f080) (0xc0004a7a40) Stream removed, broadcasting: 5\n" May 24 00:54:12.159: INFO: stdout: "" May 24 00:54:12.167: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6790 execpod-affinity5zkwx -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.80.42:80/ ; done' May 24 00:54:12.468: INFO: stderr: "I0524 00:54:12.303259 3244 log.go:172] (0xc000a520b0) (0xc0005da140) Create stream\nI0524 00:54:12.303321 3244 log.go:172] (0xc000a520b0) (0xc0005da140) Stream added, broadcasting: 1\nI0524 00:54:12.305417 3244 log.go:172] (0xc000a520b0) Reply frame received for 1\nI0524 00:54:12.305487 3244 log.go:172] (0xc000a520b0) (0xc000534c80) Create stream\nI0524 00:54:12.305514 3244 log.go:172] (0xc000a520b0) (0xc000534c80) Stream added, broadcasting: 3\nI0524 00:54:12.306449 3244 log.go:172] (0xc000a520b0) Reply frame received for 3\nI0524 00:54:12.306483 3244 log.go:172] (0xc000a520b0) (0xc00015d400) Create stream\nI0524 00:54:12.306496 3244 log.go:172] (0xc000a520b0) (0xc00015d400) Stream added, broadcasting: 5\nI0524 00:54:12.307360 3244 log.go:172] (0xc000a520b0) Reply frame received for 5\nI0524 00:54:12.373661 3244 log.go:172] (0xc000a520b0) Data frame received for 5\nI0524 00:54:12.373706 3244 log.go:172] (0xc00015d400) (5) Data frame handling\nI0524 00:54:12.373745 3244 log.go:172] (0xc00015d400) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.80.42:80/\nI0524 00:54:12.373839 3244 log.go:172] (0xc000a520b0) Data frame received for 3\nI0524 00:54:12.373856 3244 log.go:172] (0xc000534c80) (3) Data frame handling\nI0524 00:54:12.373868 3244 log.go:172] (0xc000534c80) (3) Data frame sent\nI0524 00:54:12.378739 3244 log.go:172] (0xc000a520b0) Data frame received for 3\nI0524 00:54:12.378770 3244 log.go:172] (0xc000534c80) (3) Data frame handling\nI0524 00:54:12.378794 3244 log.go:172] (0xc000534c80) (3) Data frame sent\nI0524 00:54:12.379775 3244 log.go:172] (0xc000a520b0) Data frame received for 3\nI0524 00:54:12.379793 3244 log.go:172] (0xc000534c80) (3) Data frame handling\nI0524 00:54:12.379800 3244 log.go:172] (0xc000534c80) (3) Data frame sent\nI0524 00:54:12.379810 3244 log.go:172] (0xc000a520b0) Data frame received for 5\nI0524 00:54:12.379816 3244 log.go:172] (0xc00015d400) (5) Data frame handling\nI0524 00:54:12.379821 3244 log.go:172] (0xc00015d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.80.42:80/\nI0524 00:54:12.386519 3244 log.go:172] (0xc000a520b0) Data frame received for 3\nI0524 00:54:12.386543 3244 log.go:172] (0xc000534c80) (3) Data frame handling\nI0524 00:54:12.386560 3244 log.go:172] (0xc000534c80) (3) Data frame sent\nI0524 00:54:12.386961 3244 log.go:172] (0xc000a520b0) Data frame received for 5\nI0524 00:54:12.386981 3244 log.go:172] (0xc00015d400) (5) Data frame handling\nI0524 00:54:12.386996 3244 log.go:172] (0xc00015d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.80.42:80/\nI0524 00:54:12.387186 3244 log.go:172] (0xc000a520b0) Data frame received for 3\nI0524 00:54:12.387232 3244 log.go:172] (0xc000534c80) (3) Data frame 
handling\nI0524 00:54:12.387276 3244 log.go:172] (0xc000534c80) (3) Data frame sent\nI0524 00:54:12.391093 3244 log.go:172] (0xc000a520b0) Data frame received for 3\nI0524 00:54:12.391128 3244 log.go:172] (0xc000534c80) (3) Data frame handling\nI0524 00:54:12.391161 3244 log.go:172] (0xc000534c80) (3) Data frame sent\nI0524 00:54:12.391537 3244 log.go:172] (0xc000a520b0) Data frame received for 5\nI0524 00:54:12.391588 3244 log.go:172] (0xc00015d400) (5) Data frame handling\nI0524 00:54:12.391615 3244 log.go:172] (0xc00015d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.80.42:80/\nI0524 00:54:12.391662 3244 log.go:172] (0xc000a520b0) Data frame received for 3\nI0524 00:54:12.391680 3244 log.go:172] (0xc000534c80) (3) Data frame handling\nI0524 00:54:12.391699 3244 log.go:172] (0xc000534c80) (3) Data frame sent\nI0524 00:54:12.397098 3244 log.go:172] (0xc000a520b0) Data frame received for 3\nI0524 00:54:12.397280 3244 log.go:172] (0xc000534c80) (3) Data frame handling\nI0524 00:54:12.397302 3244 log.go:172] (0xc000534c80) (3) Data frame sent\nI0524 00:54:12.397797 3244 log.go:172] (0xc000a520b0) Data frame received for 3\nI0524 00:54:12.397818 3244 log.go:172] (0xc000534c80) (3) Data frame handling\nI0524 00:54:12.397834 3244 log.go:172] (0xc000534c80) (3) Data frame sent\nI0524 00:54:12.397863 3244 log.go:172] (0xc000a520b0) Data frame received for 5\nI0524 00:54:12.397888 3244 log.go:172] (0xc00015d400) (5) Data frame handling\nI0524 00:54:12.397927 3244 log.go:172] (0xc00015d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.80.42:80/\nI0524 00:54:12.402686 3244 log.go:172] (0xc000a520b0) Data frame received for 3\nI0524 00:54:12.402704 3244 log.go:172] (0xc000534c80) (3) Data frame handling\nI0524 00:54:12.402722 3244 log.go:172] (0xc000534c80) (3) Data frame sent\nI0524 00:54:12.403325 3244 log.go:172] (0xc000a520b0) Data frame received for 5\nI0524 00:54:12.403367 3244 log.go:172] (0xc00015d400) (5) Data frame handling\nI0524 00:54:12.403395 3244 log.go:172] (0xc00015d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.80.42:80/\nI0524 00:54:12.403429 3244 log.go:172] (0xc000a520b0) Data frame received for 3\nI0524 00:54:12.403449 3244 log.go:172] (0xc000534c80) (3) Data frame handling\nI0524 00:54:12.403493 3244 log.go:172] (0xc000534c80) (3) Data frame sent\nI0524 00:54:12.407189 3244 log.go:172] (0xc000a520b0) Data frame received for 3\nI0524 00:54:12.407211 3244 log.go:172] (0xc000534c80) (3) Data frame handling\nI0524 00:54:12.407241 3244 log.go:172] (0xc000534c80) (3) Data frame sent\nI0524 00:54:12.407621 3244 log.go:172] (0xc000a520b0) Data frame received for 3\nI0524 00:54:12.407636 3244 log.go:172] (0xc000534c80) (3) Data frame handling\nI0524 00:54:12.407656 3244 log.go:172] (0xc000534c80) (3) Data frame sent\nI0524 00:54:12.407673 3244 log.go:172] (0xc000a520b0) Data frame received for 5\nI0524 00:54:12.407692 3244 log.go:172] (0xc00015d400) (5) Data frame handling\nI0524 00:54:12.407715 3244 log.go:172] (0xc00015d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.80.42:80/\nI0524 00:54:12.411675 3244 log.go:172] (0xc000a520b0) Data frame received for 3\nI0524 00:54:12.411697 3244 log.go:172] (0xc000534c80) (3) Data frame handling\nI0524 00:54:12.411715 3244 log.go:172] (0xc000534c80) (3) Data frame sent\nI0524 00:54:12.412084 3244 log.go:172] (0xc000a520b0) Data frame received for 5\nI0524 00:54:12.412099 3244 log.go:172] (0xc00015d400) (5) Data frame 
handling\nI0524 00:54:12.412107 3244 log.go:172] (0xc00015d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.80.42:80/\nI0524 00:54:12.412139 3244 log.go:172] (0xc000a520b0) Data frame received for 3\nI0524 00:54:12.412168 3244 log.go:172] (0xc000534c80) (3) Data frame handling\nI0524 00:54:12.412195 3244 log.go:172] (0xc000534c80) (3) Data frame sent\nI0524 00:54:12.415924 3244 log.go:172] (0xc000a520b0) Data frame received for 3\nI0524 00:54:12.415939 3244 log.go:172] (0xc000534c80) (3) Data frame handling\nI0524 00:54:12.415953 3244 log.go:172] (0xc000534c80) (3) Data frame sent\nI0524 00:54:12.416463 3244 log.go:172] (0xc000a520b0) Data frame received for 5\nI0524 00:54:12.416496 3244 log.go:172] (0xc00015d400) (5) Data frame handling\nI0524 00:54:12.416510 3244 log.go:172] (0xc00015d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.80.42:80/\nI0524 00:54:12.416530 3244 log.go:172] (0xc000a520b0) Data frame received for 3\nI0524 00:54:12.416547 3244 log.go:172] (0xc000534c80) (3) Data frame handling\nI0524 00:54:12.416571 3244 log.go:172] (0xc000534c80) (3) Data frame sent\nI0524 00:54:12.420468 3244 log.go:172] (0xc000a520b0) Data frame received for 3\nI0524 00:54:12.420497 3244 log.go:172] (0xc000534c80) (3) Data frame handling\nI0524 00:54:12.420514 3244 log.go:172] (0xc000534c80) (3) Data frame sent\nI0524 00:54:12.420909 3244 log.go:172] (0xc000a520b0) Data frame received for 5\nI0524 00:54:12.420933 3244 log.go:172] (0xc00015d400) (5) Data frame handling\nI0524 00:54:12.420952 3244 log.go:172] (0xc00015d400) (5) Data frame sent\nI0524 00:54:12.420971 3244 log.go:172] (0xc000a520b0) Data frame received for 3\nI0524 00:54:12.420987 3244 log.go:172] (0xc000534c80) (3) Data frame handling\nI0524 00:54:12.421002 3244 log.go:172] (0xc000534c80) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.80.42:80/\nI0524 00:54:12.424822 3244 log.go:172] (0xc000a520b0) Data frame received for 3\nI0524 00:54:12.424850 3244 log.go:172] (0xc000534c80) (3) Data frame handling\nI0524 00:54:12.424871 3244 log.go:172] (0xc000534c80) (3) Data frame sent\nI0524 00:54:12.425400 3244 log.go:172] (0xc000a520b0) Data frame received for 3\nI0524 00:54:12.425436 3244 log.go:172] (0xc000534c80) (3) Data frame handling\nI0524 00:54:12.425466 3244 log.go:172] (0xc000534c80) (3) Data frame sent\nI0524 00:54:12.425495 3244 log.go:172] (0xc000a520b0) Data frame received for 5\nI0524 00:54:12.425507 3244 log.go:172] (0xc00015d400) (5) Data frame handling\nI0524 00:54:12.425523 3244 log.go:172] (0xc00015d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.80.42:80/\nI0524 00:54:12.429464 3244 log.go:172] (0xc000a520b0) Data frame received for 3\nI0524 00:54:12.429500 3244 log.go:172] (0xc000534c80) (3) Data frame handling\nI0524 00:54:12.429532 3244 log.go:172] (0xc000534c80) (3) Data frame sent\nI0524 00:54:12.429838 3244 log.go:172] (0xc000a520b0) Data frame received for 5\nI0524 00:54:12.429852 3244 log.go:172] (0xc00015d400) (5) Data frame handling\nI0524 00:54:12.429859 3244 log.go:172] (0xc00015d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.80.42:80/\nI0524 00:54:12.429870 3244 log.go:172] (0xc000a520b0) Data frame received for 3\nI0524 00:54:12.429875 3244 log.go:172] (0xc000534c80) (3) Data frame handling\nI0524 00:54:12.429882 3244 log.go:172] (0xc000534c80) (3) Data frame sent\nI0524 00:54:12.436940 3244 log.go:172] (0xc000a520b0) Data frame received for 3\nI0524 
00:54:12.436960 3244 log.go:172] (0xc000534c80) (3) Data frame handling\nI0524 00:54:12.436981 3244 log.go:172] (0xc000534c80) (3) Data frame sent\nI0524 00:54:12.437487 3244 log.go:172] (0xc000a520b0) Data frame received for 5\nI0524 00:54:12.437517 3244 log.go:172] (0xc00015d400) (5) Data frame handling\nI0524 00:54:12.437546 3244 log.go:172] (0xc00015d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.80.42:80/I0524 00:54:12.437679 3244 log.go:172] (0xc000a520b0) Data frame received for 5\nI0524 00:54:12.437707 3244 log.go:172] (0xc00015d400) (5) Data frame handling\nI0524 00:54:12.437721 3244 log.go:172] (0xc00015d400) (5) Data frame sent\n\nI0524 00:54:12.437939 3244 log.go:172] (0xc000a520b0) Data frame received for 3\nI0524 00:54:12.437956 3244 log.go:172] (0xc000534c80) (3) Data frame handling\nI0524 00:54:12.437981 3244 log.go:172] (0xc000534c80) (3) Data frame sent\nI0524 00:54:12.441887 3244 log.go:172] (0xc000a520b0) Data frame received for 3\nI0524 00:54:12.441914 3244 log.go:172] (0xc000534c80) (3) Data frame handling\nI0524 00:54:12.441936 3244 log.go:172] (0xc000534c80) (3) Data frame sent\nI0524 00:54:12.442751 3244 log.go:172] (0xc000a520b0) Data frame received for 5\nI0524 00:54:12.442783 3244 log.go:172] (0xc00015d400) (5) Data frame handling\nI0524 00:54:12.442799 3244 log.go:172] (0xc00015d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.80.42:80/\nI0524 00:54:12.442833 3244 log.go:172] (0xc000a520b0) Data frame received for 3\nI0524 00:54:12.442849 3244 log.go:172] (0xc000534c80) (3) Data frame handling\nI0524 00:54:12.442872 3244 log.go:172] (0xc000534c80) (3) Data frame sent\nI0524 00:54:12.446062 3244 log.go:172] (0xc000a520b0) Data frame received for 3\nI0524 00:54:12.446082 3244 log.go:172] (0xc000534c80) (3) Data frame handling\nI0524 00:54:12.446104 3244 log.go:172] (0xc000534c80) (3) Data frame sent\nI0524 00:54:12.446535 3244 log.go:172] (0xc000a520b0) Data frame received for 5\nI0524 00:54:12.446559 3244 log.go:172] (0xc00015d400) (5) Data frame handling\nI0524 00:54:12.446579 3244 log.go:172] (0xc00015d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.80.42:80/\nI0524 00:54:12.446620 3244 log.go:172] (0xc000a520b0) Data frame received for 3\nI0524 00:54:12.446641 3244 log.go:172] (0xc000534c80) (3) Data frame handling\nI0524 00:54:12.446683 3244 log.go:172] (0xc000534c80) (3) Data frame sent\nI0524 00:54:12.451184 3244 log.go:172] (0xc000a520b0) Data frame received for 3\nI0524 00:54:12.451207 3244 log.go:172] (0xc000534c80) (3) Data frame handling\nI0524 00:54:12.451233 3244 log.go:172] (0xc000534c80) (3) Data frame sent\nI0524 00:54:12.451624 3244 log.go:172] (0xc000a520b0) Data frame received for 5\nI0524 00:54:12.451695 3244 log.go:172] (0xc00015d400) (5) Data frame handling\nI0524 00:54:12.451712 3244 log.go:172] (0xc00015d400) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.80.42:80/\nI0524 00:54:12.451727 3244 log.go:172] (0xc000a520b0) Data frame received for 3\nI0524 00:54:12.451740 3244 log.go:172] (0xc000534c80) (3) Data frame handling\nI0524 00:54:12.451758 3244 log.go:172] (0xc000534c80) (3) Data frame sent\nI0524 00:54:12.459181 3244 log.go:172] (0xc000a520b0) Data frame received for 3\nI0524 00:54:12.459222 3244 log.go:172] (0xc000534c80) (3) Data frame handling\nI0524 00:54:12.459257 3244 log.go:172] (0xc000534c80) (3) Data frame sent\nI0524 00:54:12.460092 3244 log.go:172] (0xc000a520b0) Data frame received for 5\nI0524 
00:54:12.460117 3244 log.go:172] (0xc00015d400) (5) Data frame handling\nI0524 00:54:12.460286 3244 log.go:172] (0xc000a520b0) Data frame received for 3\nI0524 00:54:12.460312 3244 log.go:172] (0xc000534c80) (3) Data frame handling\nI0524 00:54:12.462999 3244 log.go:172] (0xc000a520b0) Data frame received for 1\nI0524 00:54:12.463036 3244 log.go:172] (0xc0005da140) (1) Data frame handling\nI0524 00:54:12.463052 3244 log.go:172] (0xc0005da140) (1) Data frame sent\nI0524 00:54:12.463075 3244 log.go:172] (0xc000a520b0) (0xc0005da140) Stream removed, broadcasting: 1\nI0524 00:54:12.463098 3244 log.go:172] (0xc000a520b0) Go away received\nI0524 00:54:12.463539 3244 log.go:172] (0xc000a520b0) (0xc0005da140) Stream removed, broadcasting: 1\nI0524 00:54:12.463559 3244 log.go:172] (0xc000a520b0) (0xc000534c80) Stream removed, broadcasting: 3\nI0524 00:54:12.463568 3244 log.go:172] (0xc000a520b0) (0xc00015d400) Stream removed, broadcasting: 5\n" May 24 00:54:12.468: INFO: stdout: "\naffinity-clusterip-transition-ngnxr\naffinity-clusterip-transition-kpl5h\naffinity-clusterip-transition-ngnxr\naffinity-clusterip-transition-ngnxr\naffinity-clusterip-transition-kpl5h\naffinity-clusterip-transition-kpl5h\naffinity-clusterip-transition-ngnxr\naffinity-clusterip-transition-kpl5h\naffinity-clusterip-transition-jmvdx\naffinity-clusterip-transition-jmvdx\naffinity-clusterip-transition-ngnxr\naffinity-clusterip-transition-kpl5h\naffinity-clusterip-transition-jmvdx\naffinity-clusterip-transition-kpl5h\naffinity-clusterip-transition-jmvdx\naffinity-clusterip-transition-ngnxr" May 24 00:54:12.468: INFO: Received response from host: May 24 00:54:12.468: INFO: Received response from host: affinity-clusterip-transition-ngnxr May 24 00:54:12.468: INFO: Received response from host: affinity-clusterip-transition-kpl5h May 24 00:54:12.468: INFO: Received response from host: affinity-clusterip-transition-ngnxr May 24 00:54:12.468: INFO: Received response from host: affinity-clusterip-transition-ngnxr May 24 00:54:12.468: INFO: Received response from host: affinity-clusterip-transition-kpl5h May 24 00:54:12.468: INFO: Received response from host: affinity-clusterip-transition-kpl5h May 24 00:54:12.468: INFO: Received response from host: affinity-clusterip-transition-ngnxr May 24 00:54:12.468: INFO: Received response from host: affinity-clusterip-transition-kpl5h May 24 00:54:12.468: INFO: Received response from host: affinity-clusterip-transition-jmvdx May 24 00:54:12.468: INFO: Received response from host: affinity-clusterip-transition-jmvdx May 24 00:54:12.468: INFO: Received response from host: affinity-clusterip-transition-ngnxr May 24 00:54:12.468: INFO: Received response from host: affinity-clusterip-transition-kpl5h May 24 00:54:12.468: INFO: Received response from host: affinity-clusterip-transition-jmvdx May 24 00:54:12.468: INFO: Received response from host: affinity-clusterip-transition-kpl5h May 24 00:54:12.468: INFO: Received response from host: affinity-clusterip-transition-jmvdx May 24 00:54:12.468: INFO: Received response from host: affinity-clusterip-transition-ngnxr May 24 00:54:12.478: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-6790 execpod-affinity5zkwx -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.80.42:80/ ; done' May 24 00:54:12.797: INFO: stderr: "I0524 00:54:12.628794 3265 log.go:172] (0xc000442d10) (0xc000a90460) Create stream\nI0524 00:54:12.628864 3265 
log.go:172] (0xc000442d10) (0xc000a90460) Stream added, broadcasting: 1\nI0524 00:54:12.634154 3265 log.go:172] (0xc000442d10) Reply frame received for 1\nI0524 00:54:12.634193 3265 log.go:172] (0xc000442d10) (0xc0004f65a0) Create stream\nI0524 00:54:12.634204 3265 log.go:172] (0xc000442d10) (0xc0004f65a0) Stream added, broadcasting: 3\nI0524 00:54:12.635179 3265 log.go:172] (0xc000442d10) Reply frame received for 3\nI0524 00:54:12.635233 3265 log.go:172] (0xc000442d10) (0xc00044c1e0) Create stream\nI0524 00:54:12.635250 3265 log.go:172] (0xc000442d10) (0xc00044c1e0) Stream added, broadcasting: 5\nI0524 00:54:12.636071 3265 log.go:172] (0xc000442d10) Reply frame received for 5\nI0524 00:54:12.709500 3265 log.go:172] (0xc000442d10) Data frame received for 3\nI0524 00:54:12.709558 3265 log.go:172] (0xc0004f65a0) (3) Data frame handling\nI0524 00:54:12.709580 3265 log.go:172] (0xc0004f65a0) (3) Data frame sent\nI0524 00:54:12.709624 3265 log.go:172] (0xc000442d10) Data frame received for 5\nI0524 00:54:12.709642 3265 log.go:172] (0xc00044c1e0) (5) Data frame handling\nI0524 00:54:12.709663 3265 log.go:172] (0xc00044c1e0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.80.42:80/\nI0524 00:54:12.714209 3265 log.go:172] (0xc000442d10) Data frame received for 3\nI0524 00:54:12.714246 3265 log.go:172] (0xc0004f65a0) (3) Data frame handling\nI0524 00:54:12.714277 3265 log.go:172] (0xc0004f65a0) (3) Data frame sent\nI0524 00:54:12.714411 3265 log.go:172] (0xc000442d10) Data frame received for 5\nI0524 00:54:12.714440 3265 log.go:172] (0xc00044c1e0) (5) Data frame handling\nI0524 00:54:12.714466 3265 log.go:172] (0xc00044c1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.80.42:80/\nI0524 00:54:12.714545 3265 log.go:172] (0xc000442d10) Data frame received for 3\nI0524 00:54:12.714573 3265 log.go:172] (0xc0004f65a0) (3) Data frame handling\nI0524 00:54:12.714613 3265 log.go:172] (0xc0004f65a0) (3) Data frame sent\nI0524 00:54:12.720305 3265 log.go:172] (0xc000442d10) Data frame received for 3\nI0524 00:54:12.720329 3265 log.go:172] (0xc0004f65a0) (3) Data frame handling\nI0524 00:54:12.720357 3265 log.go:172] (0xc0004f65a0) (3) Data frame sent\nI0524 00:54:12.720791 3265 log.go:172] (0xc000442d10) Data frame received for 5\nI0524 00:54:12.720808 3265 log.go:172] (0xc00044c1e0) (5) Data frame handling\nI0524 00:54:12.720825 3265 log.go:172] (0xc00044c1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.80.42:80/\nI0524 00:54:12.720962 3265 log.go:172] (0xc000442d10) Data frame received for 3\nI0524 00:54:12.720996 3265 log.go:172] (0xc0004f65a0) (3) Data frame handling\nI0524 00:54:12.721021 3265 log.go:172] (0xc0004f65a0) (3) Data frame sent\nI0524 00:54:12.727652 3265 log.go:172] (0xc000442d10) Data frame received for 3\nI0524 00:54:12.727680 3265 log.go:172] (0xc0004f65a0) (3) Data frame handling\nI0524 00:54:12.727703 3265 log.go:172] (0xc0004f65a0) (3) Data frame sent\nI0524 00:54:12.728133 3265 log.go:172] (0xc000442d10) Data frame received for 5\nI0524 00:54:12.728147 3265 log.go:172] (0xc00044c1e0) (5) Data frame handling\nI0524 00:54:12.728153 3265 log.go:172] (0xc00044c1e0) (5) Data frame sent\nI0524 00:54:12.728158 3265 log.go:172] (0xc000442d10) Data frame received for 5\n+ I0524 00:54:12.728168 3265 log.go:172] (0xc000442d10) Data frame received for 3\nI0524 00:54:12.728183 3265 log.go:172] (0xc0004f65a0) (3) Data frame handling\nI0524 00:54:12.728190 3265 log.go:172] (0xc0004f65a0) (3) Data 
frame sent\nI0524 00:54:12.728203 3265 log.go:172] (0xc00044c1e0) (5) Data frame handling\nI0524 00:54:12.728211 3265 log.go:172] (0xc00044c1e0) (5) Data frame sent\nI0524 00:54:12.728218 3265 log.go:172] (0xc000442d10) Data frame received for 5\nI0524 00:54:12.728228 3265 log.go:172] (0xc00044c1e0) (5) Data frame handling\necho\n+ curl -q -s --connect-timeout 2 http://10.96.80.42:80/\nI0524 00:54:12.728261 3265 log.go:172] (0xc00044c1e0) (5) Data frame sent\nI0524 00:54:12.734656 3265 log.go:172] (0xc000442d10) Data frame received for 3\nI0524 00:54:12.734683 3265 log.go:172] (0xc0004f65a0) (3) Data frame handling\nI0524 00:54:12.734711 3265 log.go:172] (0xc0004f65a0) (3) Data frame sent\nI0524 00:54:12.735583 3265 log.go:172] (0xc000442d10) Data frame received for 3\nI0524 00:54:12.735596 3265 log.go:172] (0xc0004f65a0) (3) Data frame handling\nI0524 00:54:12.735604 3265 log.go:172] (0xc0004f65a0) (3) Data frame sent\nI0524 00:54:12.735617 3265 log.go:172] (0xc000442d10) Data frame received for 5\nI0524 00:54:12.735645 3265 log.go:172] (0xc00044c1e0) (5) Data frame handling\nI0524 00:54:12.735681 3265 log.go:172] (0xc00044c1e0) (5) Data frame sent\nI0524 00:54:12.735692 3265 log.go:172] (0xc000442d10) Data frame received for 5\n+ echo\n+ curl -q -s --connect-timeout 2I0524 00:54:12.735700 3265 log.go:172] (0xc00044c1e0) (5) Data frame handling\nI0524 00:54:12.735755 3265 log.go:172] (0xc00044c1e0) (5) Data frame sent\n http://10.96.80.42:80/\nI0524 00:54:12.738969 3265 log.go:172] (0xc000442d10) Data frame received for 3\nI0524 00:54:12.739003 3265 log.go:172] (0xc0004f65a0) (3) Data frame handling\nI0524 00:54:12.739038 3265 log.go:172] (0xc0004f65a0) (3) Data frame sent\nI0524 00:54:12.739347 3265 log.go:172] (0xc000442d10) Data frame received for 3\nI0524 00:54:12.739370 3265 log.go:172] (0xc000442d10) Data frame received for 5\nI0524 00:54:12.739396 3265 log.go:172] (0xc00044c1e0) (5) Data frame handling\nI0524 00:54:12.739407 3265 log.go:172] (0xc00044c1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.80.42:80/\nI0524 00:54:12.739423 3265 log.go:172] (0xc0004f65a0) (3) Data frame handling\nI0524 00:54:12.739435 3265 log.go:172] (0xc0004f65a0) (3) Data frame sent\nI0524 00:54:12.743132 3265 log.go:172] (0xc000442d10) Data frame received for 3\nI0524 00:54:12.743165 3265 log.go:172] (0xc0004f65a0) (3) Data frame handling\nI0524 00:54:12.743197 3265 log.go:172] (0xc0004f65a0) (3) Data frame sent\nI0524 00:54:12.743840 3265 log.go:172] (0xc000442d10) Data frame received for 3\nI0524 00:54:12.743931 3265 log.go:172] (0xc0004f65a0) (3) Data frame handling\nI0524 00:54:12.743955 3265 log.go:172] (0xc0004f65a0) (3) Data frame sent\nI0524 00:54:12.743978 3265 log.go:172] (0xc000442d10) Data frame received for 5\nI0524 00:54:12.743990 3265 log.go:172] (0xc00044c1e0) (5) Data frame handling\nI0524 00:54:12.744002 3265 log.go:172] (0xc00044c1e0) (5) Data frame sent\nI0524 00:54:12.744016 3265 log.go:172] (0xc000442d10) Data frame received for 5\nI0524 00:54:12.744028 3265 log.go:172] (0xc00044c1e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.80.42:80/\nI0524 00:54:12.744053 3265 log.go:172] (0xc00044c1e0) (5) Data frame sent\nI0524 00:54:12.750072 3265 log.go:172] (0xc000442d10) Data frame received for 3\nI0524 00:54:12.750100 3265 log.go:172] (0xc0004f65a0) (3) Data frame handling\nI0524 00:54:12.750117 3265 log.go:172] (0xc0004f65a0) (3) Data frame sent\nI0524 00:54:12.750535 3265 log.go:172] (0xc000442d10) Data frame received 
for 3\nI0524 00:54:12.750555 3265 log.go:172] (0xc0004f65a0) (3) Data frame handling\nI0524 00:54:12.750567 3265 log.go:172] (0xc0004f65a0) (3) Data frame sent\nI0524 00:54:12.750586 3265 log.go:172] (0xc000442d10) Data frame received for 5\nI0524 00:54:12.750600 3265 log.go:172] (0xc00044c1e0) (5) Data frame handling\nI0524 00:54:12.750612 3265 log.go:172] (0xc00044c1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.80.42:80/\nI0524 00:54:12.754962 3265 log.go:172] (0xc000442d10) Data frame received for 3\nI0524 00:54:12.754977 3265 log.go:172] (0xc0004f65a0) (3) Data frame handling\nI0524 00:54:12.754992 3265 log.go:172] (0xc0004f65a0) (3) Data frame sent\nI0524 00:54:12.755227 3265 log.go:172] (0xc000442d10) Data frame received for 3\nI0524 00:54:12.755248 3265 log.go:172] (0xc0004f65a0) (3) Data frame handling\nI0524 00:54:12.755256 3265 log.go:172] (0xc0004f65a0) (3) Data frame sent\nI0524 00:54:12.755267 3265 log.go:172] (0xc000442d10) Data frame received for 5\nI0524 00:54:12.755276 3265 log.go:172] (0xc00044c1e0) (5) Data frame handling\nI0524 00:54:12.755283 3265 log.go:172] (0xc00044c1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.80.42:80/\nI0524 00:54:12.760352 3265 log.go:172] (0xc000442d10) Data frame received for 3\nI0524 00:54:12.760369 3265 log.go:172] (0xc0004f65a0) (3) Data frame handling\nI0524 00:54:12.760382 3265 log.go:172] (0xc0004f65a0) (3) Data frame sent\nI0524 00:54:12.760746 3265 log.go:172] (0xc000442d10) Data frame received for 5\nI0524 00:54:12.760767 3265 log.go:172] (0xc000442d10) Data frame received for 3\nI0524 00:54:12.760782 3265 log.go:172] (0xc0004f65a0) (3) Data frame handling\nI0524 00:54:12.760789 3265 log.go:172] (0xc0004f65a0) (3) Data frame sent\nI0524 00:54:12.760822 3265 log.go:172] (0xc00044c1e0) (5) Data frame handling\nI0524 00:54:12.760888 3265 log.go:172] (0xc00044c1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.80.42:80/\nI0524 00:54:12.765238 3265 log.go:172] (0xc000442d10) Data frame received for 3\nI0524 00:54:12.765257 3265 log.go:172] (0xc0004f65a0) (3) Data frame handling\nI0524 00:54:12.765265 3265 log.go:172] (0xc0004f65a0) (3) Data frame sent\nI0524 00:54:12.765838 3265 log.go:172] (0xc000442d10) Data frame received for 3\nI0524 00:54:12.765866 3265 log.go:172] (0xc0004f65a0) (3) Data frame handling\nI0524 00:54:12.765899 3265 log.go:172] (0xc0004f65a0) (3) Data frame sent\nI0524 00:54:12.765919 3265 log.go:172] (0xc000442d10) Data frame received for 5\nI0524 00:54:12.765931 3265 log.go:172] (0xc00044c1e0) (5) Data frame handling\nI0524 00:54:12.765942 3265 log.go:172] (0xc00044c1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.80.42:80/\nI0524 00:54:12.769956 3265 log.go:172] (0xc000442d10) Data frame received for 3\nI0524 00:54:12.769972 3265 log.go:172] (0xc0004f65a0) (3) Data frame handling\nI0524 00:54:12.769983 3265 log.go:172] (0xc0004f65a0) (3) Data frame sent\nI0524 00:54:12.770309 3265 log.go:172] (0xc000442d10) Data frame received for 5\nI0524 00:54:12.770335 3265 log.go:172] (0xc00044c1e0) (5) Data frame handling\nI0524 00:54:12.770352 3265 log.go:172] (0xc00044c1e0) (5) Data frame sent\nI0524 00:54:12.770360 3265 log.go:172] (0xc000442d10) Data frame received for 5\nI0524 00:54:12.770367 3265 log.go:172] (0xc00044c1e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.80.42:80/\nI0524 00:54:12.770382 3265 log.go:172] (0xc00044c1e0) (5) Data frame sent\nI0524 
00:54:12.770470 3265 log.go:172] (0xc000442d10) Data frame received for 3\nI0524 00:54:12.770490 3265 log.go:172] (0xc0004f65a0) (3) Data frame handling\nI0524 00:54:12.770511 3265 log.go:172] (0xc0004f65a0) (3) Data frame sent\nI0524 00:54:12.773870 3265 log.go:172] (0xc000442d10) Data frame received for 3\nI0524 00:54:12.773899 3265 log.go:172] (0xc0004f65a0) (3) Data frame handling\nI0524 00:54:12.773918 3265 log.go:172] (0xc0004f65a0) (3) Data frame sent\nI0524 00:54:12.774108 3265 log.go:172] (0xc000442d10) Data frame received for 5\nI0524 00:54:12.774135 3265 log.go:172] (0xc00044c1e0) (5) Data frame handling\nI0524 00:54:12.774155 3265 log.go:172] (0xc00044c1e0) (5) Data frame sent\nI0524 00:54:12.774166 3265 log.go:172] (0xc000442d10) Data frame received for 5\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.80.42:80/I0524 00:54:12.774180 3265 log.go:172] (0xc00044c1e0) (5) Data frame handling\nI0524 00:54:12.774199 3265 log.go:172] (0xc00044c1e0) (5) Data frame sent\n\nI0524 00:54:12.774215 3265 log.go:172] (0xc000442d10) Data frame received for 3\nI0524 00:54:12.774222 3265 log.go:172] (0xc0004f65a0) (3) Data frame handling\nI0524 00:54:12.774229 3265 log.go:172] (0xc0004f65a0) (3) Data frame sent\nI0524 00:54:12.777840 3265 log.go:172] (0xc000442d10) Data frame received for 3\nI0524 00:54:12.777860 3265 log.go:172] (0xc0004f65a0) (3) Data frame handling\nI0524 00:54:12.777869 3265 log.go:172] (0xc0004f65a0) (3) Data frame sent\nI0524 00:54:12.777876 3265 log.go:172] (0xc000442d10) Data frame received for 3\nI0524 00:54:12.777882 3265 log.go:172] (0xc0004f65a0) (3) Data frame handling\nI0524 00:54:12.777893 3265 log.go:172] (0xc000442d10) Data frame received for 5\nI0524 00:54:12.777910 3265 log.go:172] (0xc00044c1e0) (5) Data frame handling\nI0524 00:54:12.777924 3265 log.go:172] (0xc00044c1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.80.42:80/\nI0524 00:54:12.777953 3265 log.go:172] (0xc0004f65a0) (3) Data frame sent\nI0524 00:54:12.781053 3265 log.go:172] (0xc000442d10) Data frame received for 3\nI0524 00:54:12.781072 3265 log.go:172] (0xc0004f65a0) (3) Data frame handling\nI0524 00:54:12.781094 3265 log.go:172] (0xc0004f65a0) (3) Data frame sent\nI0524 00:54:12.781702 3265 log.go:172] (0xc000442d10) Data frame received for 3\nI0524 00:54:12.781716 3265 log.go:172] (0xc0004f65a0) (3) Data frame handling\nI0524 00:54:12.781723 3265 log.go:172] (0xc0004f65a0) (3) Data frame sent\nI0524 00:54:12.781734 3265 log.go:172] (0xc000442d10) Data frame received for 5\nI0524 00:54:12.781740 3265 log.go:172] (0xc00044c1e0) (5) Data frame handling\nI0524 00:54:12.781746 3265 log.go:172] (0xc00044c1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.80.42:80/\nI0524 00:54:12.787146 3265 log.go:172] (0xc000442d10) Data frame received for 3\nI0524 00:54:12.787161 3265 log.go:172] (0xc0004f65a0) (3) Data frame handling\nI0524 00:54:12.787173 3265 log.go:172] (0xc0004f65a0) (3) Data frame sent\nI0524 00:54:12.787720 3265 log.go:172] (0xc000442d10) Data frame received for 5\nI0524 00:54:12.787745 3265 log.go:172] (0xc000442d10) Data frame received for 3\nI0524 00:54:12.787757 3265 log.go:172] (0xc0004f65a0) (3) Data frame handling\nI0524 00:54:12.787767 3265 log.go:172] (0xc0004f65a0) (3) Data frame sent\nI0524 00:54:12.787779 3265 log.go:172] (0xc00044c1e0) (5) Data frame handling\nI0524 00:54:12.787799 3265 log.go:172] (0xc00044c1e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.96.80.42:80/\nI0524 
00:54:12.790999 3265 log.go:172] (0xc000442d10) Data frame received for 3\nI0524 00:54:12.791033 3265 log.go:172] (0xc0004f65a0) (3) Data frame handling\nI0524 00:54:12.791050 3265 log.go:172] (0xc0004f65a0) (3) Data frame sent\nI0524 00:54:12.791534 3265 log.go:172] (0xc000442d10) Data frame received for 5\nI0524 00:54:12.791553 3265 log.go:172] (0xc00044c1e0) (5) Data frame handling\nI0524 00:54:12.791837 3265 log.go:172] (0xc000442d10) Data frame received for 3\nI0524 00:54:12.791851 3265 log.go:172] (0xc0004f65a0) (3) Data frame handling\nI0524 00:54:12.793299 3265 log.go:172] (0xc000442d10) Data frame received for 1\nI0524 00:54:12.793317 3265 log.go:172] (0xc000a90460) (1) Data frame handling\nI0524 00:54:12.793331 3265 log.go:172] (0xc000a90460) (1) Data frame sent\nI0524 00:54:12.793344 3265 log.go:172] (0xc000442d10) (0xc000a90460) Stream removed, broadcasting: 1\nI0524 00:54:12.793499 3265 log.go:172] (0xc000442d10) Go away received\nI0524 00:54:12.793728 3265 log.go:172] (0xc000442d10) (0xc000a90460) Stream removed, broadcasting: 1\nI0524 00:54:12.793745 3265 log.go:172] (0xc000442d10) (0xc0004f65a0) Stream removed, broadcasting: 3\nI0524 00:54:12.793754 3265 log.go:172] (0xc000442d10) (0xc00044c1e0) Stream removed, broadcasting: 5\n" May 24 00:54:12.798: INFO: stdout: "\naffinity-clusterip-transition-kpl5h\naffinity-clusterip-transition-kpl5h\naffinity-clusterip-transition-kpl5h\naffinity-clusterip-transition-kpl5h\naffinity-clusterip-transition-kpl5h\naffinity-clusterip-transition-kpl5h\naffinity-clusterip-transition-kpl5h\naffinity-clusterip-transition-kpl5h\naffinity-clusterip-transition-kpl5h\naffinity-clusterip-transition-kpl5h\naffinity-clusterip-transition-kpl5h\naffinity-clusterip-transition-kpl5h\naffinity-clusterip-transition-kpl5h\naffinity-clusterip-transition-kpl5h\naffinity-clusterip-transition-kpl5h\naffinity-clusterip-transition-kpl5h" May 24 00:54:12.798: INFO: Received response from host: May 24 00:54:12.798: INFO: Received response from host: affinity-clusterip-transition-kpl5h May 24 00:54:12.798: INFO: Received response from host: affinity-clusterip-transition-kpl5h May 24 00:54:12.798: INFO: Received response from host: affinity-clusterip-transition-kpl5h May 24 00:54:12.798: INFO: Received response from host: affinity-clusterip-transition-kpl5h May 24 00:54:12.798: INFO: Received response from host: affinity-clusterip-transition-kpl5h May 24 00:54:12.798: INFO: Received response from host: affinity-clusterip-transition-kpl5h May 24 00:54:12.798: INFO: Received response from host: affinity-clusterip-transition-kpl5h May 24 00:54:12.798: INFO: Received response from host: affinity-clusterip-transition-kpl5h May 24 00:54:12.798: INFO: Received response from host: affinity-clusterip-transition-kpl5h May 24 00:54:12.798: INFO: Received response from host: affinity-clusterip-transition-kpl5h May 24 00:54:12.798: INFO: Received response from host: affinity-clusterip-transition-kpl5h May 24 00:54:12.798: INFO: Received response from host: affinity-clusterip-transition-kpl5h May 24 00:54:12.798: INFO: Received response from host: affinity-clusterip-transition-kpl5h May 24 00:54:12.798: INFO: Received response from host: affinity-clusterip-transition-kpl5h May 24 00:54:12.798: INFO: Received response from host: affinity-clusterip-transition-kpl5h May 24 00:54:12.798: INFO: Received response from host: affinity-clusterip-transition-kpl5h May 24 00:54:12.798: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in 
namespace services-6790, will wait for the garbage collector to delete the pods May 24 00:54:12.964: INFO: Deleting ReplicationController affinity-clusterip-transition took: 26.43176ms May 24 00:54:13.464: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 500.220408ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:54:25.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6790" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:30.815 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":288,"completed":244,"skipped":3989,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:54:25.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-57407ca2-8b37-43dd-ad5d-fc5f64d37fd5 STEP: Creating a pod to test consume configMaps May 24 00:54:25.583: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-62691ac7-9496-4b13-adae-01ec91da8610" in namespace "projected-1067" to be "Succeeded or Failed" May 24 00:54:25.587: INFO: Pod "pod-projected-configmaps-62691ac7-9496-4b13-adae-01ec91da8610": Phase="Pending", Reason="", readiness=false. Elapsed: 3.36448ms May 24 00:54:27.633: INFO: Pod "pod-projected-configmaps-62691ac7-9496-4b13-adae-01ec91da8610": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049245942s May 24 00:54:29.693: INFO: Pod "pod-projected-configmaps-62691ac7-9496-4b13-adae-01ec91da8610": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.109623511s STEP: Saw pod success May 24 00:54:29.693: INFO: Pod "pod-projected-configmaps-62691ac7-9496-4b13-adae-01ec91da8610" satisfied condition "Succeeded or Failed" May 24 00:54:29.697: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-62691ac7-9496-4b13-adae-01ec91da8610 container projected-configmap-volume-test: STEP: delete the pod May 24 00:54:29.819: INFO: Waiting for pod pod-projected-configmaps-62691ac7-9496-4b13-adae-01ec91da8610 to disappear May 24 00:54:29.845: INFO: Pod pod-projected-configmaps-62691ac7-9496-4b13-adae-01ec91da8610 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:54:29.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1067" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":245,"skipped":4014,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:54:29.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 24 00:54:41.998: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1397 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 00:54:41.998: INFO: >>> kubeConfig: /root/.kube/config I0524 00:54:42.029279 7 log.go:172] (0xc00291c9a0) (0xc00248e3c0) Create stream I0524 00:54:42.029368 7 log.go:172] (0xc00291c9a0) (0xc00248e3c0) Stream added, broadcasting: 1 I0524 00:54:42.031907 7 log.go:172] (0xc00291c9a0) Reply frame received for 1 I0524 00:54:42.031946 7 log.go:172] (0xc00291c9a0) (0xc00248e460) Create stream I0524 00:54:42.031965 7 log.go:172] (0xc00291c9a0) (0xc00248e460) Stream added, broadcasting: 3 I0524 00:54:42.032974 7 log.go:172] (0xc00291c9a0) Reply frame received for 3 I0524 00:54:42.033029 7 log.go:172] (0xc00291c9a0) (0xc001bae0a0) Create stream I0524 00:54:42.033054 7 log.go:172] (0xc00291c9a0) (0xc001bae0a0) Stream added, broadcasting: 5 I0524 00:54:42.034474 7 log.go:172] (0xc00291c9a0) Reply frame received for 5 I0524 00:54:42.096334 7 log.go:172] (0xc00291c9a0) Data frame received for 5 I0524 00:54:42.096379 7 log.go:172] (0xc001bae0a0) (5) Data frame handling I0524 00:54:42.096418 7 log.go:172] (0xc00291c9a0) Data frame received for 3 I0524 00:54:42.096444 7 
log.go:172] (0xc00248e460) (3) Data frame handling I0524 00:54:42.096460 7 log.go:172] (0xc00248e460) (3) Data frame sent I0524 00:54:42.096483 7 log.go:172] (0xc00291c9a0) Data frame received for 3 I0524 00:54:42.096496 7 log.go:172] (0xc00248e460) (3) Data frame handling I0524 00:54:42.098107 7 log.go:172] (0xc00291c9a0) Data frame received for 1 I0524 00:54:42.098132 7 log.go:172] (0xc00248e3c0) (1) Data frame handling I0524 00:54:42.098153 7 log.go:172] (0xc00248e3c0) (1) Data frame sent I0524 00:54:42.098170 7 log.go:172] (0xc00291c9a0) (0xc00248e3c0) Stream removed, broadcasting: 1 I0524 00:54:42.098190 7 log.go:172] (0xc00291c9a0) Go away received I0524 00:54:42.098351 7 log.go:172] (0xc00291c9a0) (0xc00248e3c0) Stream removed, broadcasting: 1 I0524 00:54:42.098394 7 log.go:172] (0xc00291c9a0) (0xc00248e460) Stream removed, broadcasting: 3 I0524 00:54:42.098412 7 log.go:172] (0xc00291c9a0) (0xc001bae0a0) Stream removed, broadcasting: 5 May 24 00:54:42.098: INFO: Exec stderr: "" May 24 00:54:42.098: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1397 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 00:54:42.098: INFO: >>> kubeConfig: /root/.kube/config I0524 00:54:42.125022 7 log.go:172] (0xc0035ea580) (0xc0028c0dc0) Create stream I0524 00:54:42.125045 7 log.go:172] (0xc0035ea580) (0xc0028c0dc0) Stream added, broadcasting: 1 I0524 00:54:42.127253 7 log.go:172] (0xc0035ea580) Reply frame received for 1 I0524 00:54:42.127300 7 log.go:172] (0xc0035ea580) (0xc00248e500) Create stream I0524 00:54:42.127317 7 log.go:172] (0xc0035ea580) (0xc00248e500) Stream added, broadcasting: 3 I0524 00:54:42.128167 7 log.go:172] (0xc0035ea580) Reply frame received for 3 I0524 00:54:42.128202 7 log.go:172] (0xc0035ea580) (0xc001780c80) Create stream I0524 00:54:42.128212 7 log.go:172] (0xc0035ea580) (0xc001780c80) Stream added, broadcasting: 5 I0524 00:54:42.129044 7 log.go:172] (0xc0035ea580) Reply frame received for 5 I0524 00:54:42.204481 7 log.go:172] (0xc0035ea580) Data frame received for 5 I0524 00:54:42.204522 7 log.go:172] (0xc001780c80) (5) Data frame handling I0524 00:54:42.204556 7 log.go:172] (0xc0035ea580) Data frame received for 3 I0524 00:54:42.204577 7 log.go:172] (0xc00248e500) (3) Data frame handling I0524 00:54:42.204594 7 log.go:172] (0xc00248e500) (3) Data frame sent I0524 00:54:42.204601 7 log.go:172] (0xc0035ea580) Data frame received for 3 I0524 00:54:42.204611 7 log.go:172] (0xc00248e500) (3) Data frame handling I0524 00:54:42.206111 7 log.go:172] (0xc0035ea580) Data frame received for 1 I0524 00:54:42.206129 7 log.go:172] (0xc0028c0dc0) (1) Data frame handling I0524 00:54:42.206137 7 log.go:172] (0xc0028c0dc0) (1) Data frame sent I0524 00:54:42.206147 7 log.go:172] (0xc0035ea580) (0xc0028c0dc0) Stream removed, broadcasting: 1 I0524 00:54:42.206169 7 log.go:172] (0xc0035ea580) Go away received I0524 00:54:42.206268 7 log.go:172] (0xc0035ea580) (0xc0028c0dc0) Stream removed, broadcasting: 1 I0524 00:54:42.206285 7 log.go:172] (0xc0035ea580) (0xc00248e500) Stream removed, broadcasting: 3 I0524 00:54:42.206296 7 log.go:172] (0xc0035ea580) (0xc001780c80) Stream removed, broadcasting: 5 May 24 00:54:42.206: INFO: Exec stderr: "" May 24 00:54:42.206: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1397 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 00:54:42.206: INFO: >>> 
kubeConfig: /root/.kube/config I0524 00:54:42.237834 7 log.go:172] (0xc002c73550) (0xc0017815e0) Create stream I0524 00:54:42.237865 7 log.go:172] (0xc002c73550) (0xc0017815e0) Stream added, broadcasting: 1 I0524 00:54:42.240549 7 log.go:172] (0xc002c73550) Reply frame received for 1 I0524 00:54:42.240609 7 log.go:172] (0xc002c73550) (0xc001bae1e0) Create stream I0524 00:54:42.240636 7 log.go:172] (0xc002c73550) (0xc001bae1e0) Stream added, broadcasting: 3 I0524 00:54:42.241857 7 log.go:172] (0xc002c73550) Reply frame received for 3 I0524 00:54:42.241913 7 log.go:172] (0xc002c73550) (0xc001bae280) Create stream I0524 00:54:42.241936 7 log.go:172] (0xc002c73550) (0xc001bae280) Stream added, broadcasting: 5 I0524 00:54:42.242944 7 log.go:172] (0xc002c73550) Reply frame received for 5 I0524 00:54:42.312287 7 log.go:172] (0xc002c73550) Data frame received for 5 I0524 00:54:42.312389 7 log.go:172] (0xc001bae280) (5) Data frame handling I0524 00:54:42.313046 7 log.go:172] (0xc002c73550) Data frame received for 3 I0524 00:54:42.313086 7 log.go:172] (0xc001bae1e0) (3) Data frame handling I0524 00:54:42.313368 7 log.go:172] (0xc001bae1e0) (3) Data frame sent I0524 00:54:42.313414 7 log.go:172] (0xc002c73550) Data frame received for 3 I0524 00:54:42.313444 7 log.go:172] (0xc001bae1e0) (3) Data frame handling I0524 00:54:42.319967 7 log.go:172] (0xc002c73550) Data frame received for 1 I0524 00:54:42.319995 7 log.go:172] (0xc0017815e0) (1) Data frame handling I0524 00:54:42.320014 7 log.go:172] (0xc0017815e0) (1) Data frame sent I0524 00:54:42.320066 7 log.go:172] (0xc002c73550) (0xc0017815e0) Stream removed, broadcasting: 1 I0524 00:54:42.320148 7 log.go:172] (0xc002c73550) (0xc0017815e0) Stream removed, broadcasting: 1 I0524 00:54:42.320162 7 log.go:172] (0xc002c73550) (0xc001bae1e0) Stream removed, broadcasting: 3 I0524 00:54:42.320303 7 log.go:172] (0xc002c73550) (0xc001bae280) Stream removed, broadcasting: 5 May 24 00:54:42.320: INFO: Exec stderr: "" May 24 00:54:42.320: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1397 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 00:54:42.320: INFO: >>> kubeConfig: /root/.kube/config I0524 00:54:42.341890 7 log.go:172] (0xc002c73b80) (0xc001781ae0) Create stream I0524 00:54:42.341912 7 log.go:172] (0xc002c73b80) (0xc001781ae0) Stream added, broadcasting: 1 I0524 00:54:42.343559 7 log.go:172] (0xc002c73b80) Reply frame received for 1 I0524 00:54:42.343587 7 log.go:172] (0xc002c73b80) (0xc001781d60) Create stream I0524 00:54:42.343598 7 log.go:172] (0xc002c73b80) (0xc001781d60) Stream added, broadcasting: 3 I0524 00:54:42.344184 7 log.go:172] (0xc002c73b80) Reply frame received for 3 I0524 00:54:42.344208 7 log.go:172] (0xc002c73b80) (0xc001781e00) Create stream I0524 00:54:42.344225 7 log.go:172] (0xc002c73b80) (0xc001781e00) Stream added, broadcasting: 5 I0524 00:54:42.345429 7 log.go:172] (0xc002c73b80) Reply frame received for 5 I0524 00:54:42.406220 7 log.go:172] (0xc002c73b80) Data frame received for 3 I0524 00:54:42.406248 7 log.go:172] (0xc001781d60) (3) Data frame handling I0524 00:54:42.406270 7 log.go:172] (0xc001781d60) (3) Data frame sent I0524 00:54:42.406286 7 log.go:172] (0xc002c73b80) Data frame received for 3 I0524 00:54:42.406296 7 log.go:172] (0xc001781d60) (3) Data frame handling I0524 00:54:42.406553 7 log.go:172] (0xc002c73b80) Data frame received for 5 I0524 00:54:42.406565 7 log.go:172] (0xc001781e00) (5) Data frame 
handling I0524 00:54:42.407948 7 log.go:172] (0xc002c73b80) Data frame received for 1 I0524 00:54:42.407975 7 log.go:172] (0xc001781ae0) (1) Data frame handling I0524 00:54:42.407998 7 log.go:172] (0xc001781ae0) (1) Data frame sent I0524 00:54:42.408015 7 log.go:172] (0xc002c73b80) (0xc001781ae0) Stream removed, broadcasting: 1 I0524 00:54:42.408079 7 log.go:172] (0xc002c73b80) Go away received I0524 00:54:42.408157 7 log.go:172] (0xc002c73b80) (0xc001781ae0) Stream removed, broadcasting: 1 I0524 00:54:42.408186 7 log.go:172] (0xc002c73b80) (0xc001781d60) Stream removed, broadcasting: 3 I0524 00:54:42.408202 7 log.go:172] (0xc002c73b80) (0xc001781e00) Stream removed, broadcasting: 5 May 24 00:54:42.408: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 24 00:54:42.408: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1397 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 00:54:42.408: INFO: >>> kubeConfig: /root/.kube/config I0524 00:54:42.439514 7 log.go:172] (0xc003fce2c0) (0xc000271a40) Create stream I0524 00:54:42.439555 7 log.go:172] (0xc003fce2c0) (0xc000271a40) Stream added, broadcasting: 1 I0524 00:54:42.442066 7 log.go:172] (0xc003fce2c0) Reply frame received for 1 I0524 00:54:42.442099 7 log.go:172] (0xc003fce2c0) (0xc00248e5a0) Create stream I0524 00:54:42.442116 7 log.go:172] (0xc003fce2c0) (0xc00248e5a0) Stream added, broadcasting: 3 I0524 00:54:42.443267 7 log.go:172] (0xc003fce2c0) Reply frame received for 3 I0524 00:54:42.443308 7 log.go:172] (0xc003fce2c0) (0xc001bae5a0) Create stream I0524 00:54:42.443324 7 log.go:172] (0xc003fce2c0) (0xc001bae5a0) Stream added, broadcasting: 5 I0524 00:54:42.444362 7 log.go:172] (0xc003fce2c0) Reply frame received for 5 I0524 00:54:42.517348 7 log.go:172] (0xc003fce2c0) Data frame received for 5 I0524 00:54:42.517403 7 log.go:172] (0xc001bae5a0) (5) Data frame handling I0524 00:54:42.517439 7 log.go:172] (0xc003fce2c0) Data frame received for 3 I0524 00:54:42.517458 7 log.go:172] (0xc00248e5a0) (3) Data frame handling I0524 00:54:42.517479 7 log.go:172] (0xc00248e5a0) (3) Data frame sent I0524 00:54:42.517495 7 log.go:172] (0xc003fce2c0) Data frame received for 3 I0524 00:54:42.517508 7 log.go:172] (0xc00248e5a0) (3) Data frame handling I0524 00:54:42.519133 7 log.go:172] (0xc003fce2c0) Data frame received for 1 I0524 00:54:42.519168 7 log.go:172] (0xc000271a40) (1) Data frame handling I0524 00:54:42.519180 7 log.go:172] (0xc000271a40) (1) Data frame sent I0524 00:54:42.519190 7 log.go:172] (0xc003fce2c0) (0xc000271a40) Stream removed, broadcasting: 1 I0524 00:54:42.519211 7 log.go:172] (0xc003fce2c0) Go away received I0524 00:54:42.519325 7 log.go:172] (0xc003fce2c0) (0xc000271a40) Stream removed, broadcasting: 1 I0524 00:54:42.519358 7 log.go:172] (0xc003fce2c0) (0xc00248e5a0) Stream removed, broadcasting: 3 I0524 00:54:42.519386 7 log.go:172] (0xc003fce2c0) (0xc001bae5a0) Stream removed, broadcasting: 5 May 24 00:54:42.519: INFO: Exec stderr: "" May 24 00:54:42.519: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1397 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 00:54:42.519: INFO: >>> kubeConfig: /root/.kube/config I0524 00:54:42.553528 7 log.go:172] (0xc0035eac60) (0xc0028c1040) Create stream I0524 00:54:42.553557 7 log.go:172] 
(0xc0035eac60) (0xc0028c1040) Stream added, broadcasting: 1 I0524 00:54:42.556361 7 log.go:172] (0xc0035eac60) Reply frame received for 1 I0524 00:54:42.556418 7 log.go:172] (0xc0035eac60) (0xc0012779a0) Create stream I0524 00:54:42.556436 7 log.go:172] (0xc0035eac60) (0xc0012779a0) Stream added, broadcasting: 3 I0524 00:54:42.557714 7 log.go:172] (0xc0035eac60) Reply frame received for 3 I0524 00:54:42.557756 7 log.go:172] (0xc0035eac60) (0xc000271d60) Create stream I0524 00:54:42.557769 7 log.go:172] (0xc0035eac60) (0xc000271d60) Stream added, broadcasting: 5 I0524 00:54:42.558716 7 log.go:172] (0xc0035eac60) Reply frame received for 5 I0524 00:54:42.615998 7 log.go:172] (0xc0035eac60) Data frame received for 5 I0524 00:54:42.616023 7 log.go:172] (0xc000271d60) (5) Data frame handling I0524 00:54:42.616045 7 log.go:172] (0xc0035eac60) Data frame received for 3 I0524 00:54:42.616067 7 log.go:172] (0xc0012779a0) (3) Data frame handling I0524 00:54:42.616077 7 log.go:172] (0xc0012779a0) (3) Data frame sent I0524 00:54:42.616083 7 log.go:172] (0xc0035eac60) Data frame received for 3 I0524 00:54:42.616092 7 log.go:172] (0xc0012779a0) (3) Data frame handling I0524 00:54:42.617785 7 log.go:172] (0xc0035eac60) Data frame received for 1 I0524 00:54:42.617810 7 log.go:172] (0xc0028c1040) (1) Data frame handling I0524 00:54:42.617832 7 log.go:172] (0xc0028c1040) (1) Data frame sent I0524 00:54:42.617846 7 log.go:172] (0xc0035eac60) (0xc0028c1040) Stream removed, broadcasting: 1 I0524 00:54:42.617866 7 log.go:172] (0xc0035eac60) Go away received I0524 00:54:42.617975 7 log.go:172] (0xc0035eac60) (0xc0028c1040) Stream removed, broadcasting: 1 I0524 00:54:42.618011 7 log.go:172] (0xc0035eac60) (0xc0012779a0) Stream removed, broadcasting: 3 I0524 00:54:42.618051 7 log.go:172] (0xc0035eac60) (0xc000271d60) Stream removed, broadcasting: 5 May 24 00:54:42.618: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 24 00:54:42.618: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1397 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 00:54:42.618: INFO: >>> kubeConfig: /root/.kube/config I0524 00:54:42.644085 7 log.go:172] (0xc00291cfd0) (0xc00248e960) Create stream I0524 00:54:42.644120 7 log.go:172] (0xc00291cfd0) (0xc00248e960) Stream added, broadcasting: 1 I0524 00:54:42.646396 7 log.go:172] (0xc00291cfd0) Reply frame received for 1 I0524 00:54:42.646437 7 log.go:172] (0xc00291cfd0) (0xc00248ea00) Create stream I0524 00:54:42.646455 7 log.go:172] (0xc00291cfd0) (0xc00248ea00) Stream added, broadcasting: 3 I0524 00:54:42.647298 7 log.go:172] (0xc00291cfd0) Reply frame received for 3 I0524 00:54:42.647330 7 log.go:172] (0xc00291cfd0) (0xc000271ea0) Create stream I0524 00:54:42.647339 7 log.go:172] (0xc00291cfd0) (0xc000271ea0) Stream added, broadcasting: 5 I0524 00:54:42.648117 7 log.go:172] (0xc00291cfd0) Reply frame received for 5 I0524 00:54:42.710064 7 log.go:172] (0xc00291cfd0) Data frame received for 3 I0524 00:54:42.710107 7 log.go:172] (0xc00248ea00) (3) Data frame handling I0524 00:54:42.710132 7 log.go:172] (0xc00248ea00) (3) Data frame sent I0524 00:54:42.710141 7 log.go:172] (0xc00291cfd0) Data frame received for 3 I0524 00:54:42.710146 7 log.go:172] (0xc00248ea00) (3) Data frame handling I0524 00:54:42.710225 7 log.go:172] (0xc00291cfd0) Data frame received for 5 I0524 00:54:42.710243 7 log.go:172] 
(0xc000271ea0) (5) Data frame handling I0524 00:54:42.711716 7 log.go:172] (0xc00291cfd0) Data frame received for 1 I0524 00:54:42.711746 7 log.go:172] (0xc00248e960) (1) Data frame handling I0524 00:54:42.711766 7 log.go:172] (0xc00248e960) (1) Data frame sent I0524 00:54:42.711785 7 log.go:172] (0xc00291cfd0) (0xc00248e960) Stream removed, broadcasting: 1 I0524 00:54:42.711808 7 log.go:172] (0xc00291cfd0) Go away received I0524 00:54:42.711872 7 log.go:172] (0xc00291cfd0) (0xc00248e960) Stream removed, broadcasting: 1 I0524 00:54:42.711888 7 log.go:172] (0xc00291cfd0) (0xc00248ea00) Stream removed, broadcasting: 3 I0524 00:54:42.711899 7 log.go:172] (0xc00291cfd0) (0xc000271ea0) Stream removed, broadcasting: 5 May 24 00:54:42.711: INFO: Exec stderr: "" May 24 00:54:42.711: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1397 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 00:54:42.711: INFO: >>> kubeConfig: /root/.kube/config I0524 00:54:42.736508 7 log.go:172] (0xc003fce630) (0xc001222320) Create stream I0524 00:54:42.736538 7 log.go:172] (0xc003fce630) (0xc001222320) Stream added, broadcasting: 1 I0524 00:54:42.738495 7 log.go:172] (0xc003fce630) Reply frame received for 1 I0524 00:54:42.738530 7 log.go:172] (0xc003fce630) (0xc00248ebe0) Create stream I0524 00:54:42.738548 7 log.go:172] (0xc003fce630) (0xc00248ebe0) Stream added, broadcasting: 3 I0524 00:54:42.739433 7 log.go:172] (0xc003fce630) Reply frame received for 3 I0524 00:54:42.739461 7 log.go:172] (0xc003fce630) (0xc001277a40) Create stream I0524 00:54:42.739472 7 log.go:172] (0xc003fce630) (0xc001277a40) Stream added, broadcasting: 5 I0524 00:54:42.740216 7 log.go:172] (0xc003fce630) Reply frame received for 5 I0524 00:54:42.812581 7 log.go:172] (0xc003fce630) Data frame received for 3 I0524 00:54:42.812617 7 log.go:172] (0xc00248ebe0) (3) Data frame handling I0524 00:54:42.812626 7 log.go:172] (0xc00248ebe0) (3) Data frame sent I0524 00:54:42.812632 7 log.go:172] (0xc003fce630) Data frame received for 3 I0524 00:54:42.812637 7 log.go:172] (0xc00248ebe0) (3) Data frame handling I0524 00:54:42.812660 7 log.go:172] (0xc003fce630) Data frame received for 5 I0524 00:54:42.812700 7 log.go:172] (0xc001277a40) (5) Data frame handling I0524 00:54:42.814202 7 log.go:172] (0xc003fce630) Data frame received for 1 I0524 00:54:42.814230 7 log.go:172] (0xc001222320) (1) Data frame handling I0524 00:54:42.814244 7 log.go:172] (0xc001222320) (1) Data frame sent I0524 00:54:42.814258 7 log.go:172] (0xc003fce630) (0xc001222320) Stream removed, broadcasting: 1 I0524 00:54:42.814271 7 log.go:172] (0xc003fce630) Go away received I0524 00:54:42.814488 7 log.go:172] (0xc003fce630) (0xc001222320) Stream removed, broadcasting: 1 I0524 00:54:42.814504 7 log.go:172] (0xc003fce630) (0xc00248ebe0) Stream removed, broadcasting: 3 I0524 00:54:42.814512 7 log.go:172] (0xc003fce630) (0xc001277a40) Stream removed, broadcasting: 5 May 24 00:54:42.814: INFO: Exec stderr: "" May 24 00:54:42.814: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1397 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 00:54:42.814: INFO: >>> kubeConfig: /root/.kube/config I0524 00:54:42.841447 7 log.go:172] (0xc002c2c6e0) (0xc0015dc000) Create stream I0524 00:54:42.841472 7 log.go:172] (0xc002c2c6e0) (0xc0015dc000) Stream added, broadcasting: 1 
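(Editor's aside: the SPDY frame bookkeeping above and below is the framework's ExecWithOptions helper driving the pod "exec" subresource. Each command opens one session whose streams 1, 3 and 5 carry the error channel, stdout and stderr respectively; here stream 3 returns the /etc/hosts content being verified, and the same plumbing is behind the curl loops of the session-affinity test earlier and the nc probes of the networking test that follows. Below is a minimal client-go sketch of one such call, reusing the namespace, pod and container names from this test and the kubeconfig path of this run; it illustrates the mechanism and is not the framework's actual implementation.)

package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	// Load the kubeconfig this test run points at.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// POST to the exec subresource: the API call behind
	// ExecWithOptions {Command:[cat /etc/hosts] ...} in the log above.
	req := clientset.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("e2e-kubelet-etc-hosts-1397").
		Name("test-pod").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "busybox-1",
			Command:   []string{"cat", "/etc/hosts"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	// Stream multiplexes the error/stdout/stderr channels that appear as
	// streams 1, 3 and 5 in the surrounding frames.
	var stdout, stderr bytes.Buffer
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Printf("exec stdout: %q, stderr: %q\n", stdout.String(), stderr.String())
}

(Each "Stream removed, broadcasting: N" entry marks the teardown of one of those channels once the command exits; the test then compares the captured stdout against expectations: a kubelet-managed /etc/hosts unless the container mounts its own /etc/hosts or the pod runs with hostNetwork=true, as the STEP lines state.)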
I0524 00:54:42.843441 7 log.go:172] (0xc002c2c6e0) Reply frame received for 1 I0524 00:54:42.843478 7 log.go:172] (0xc002c2c6e0) (0xc001bae6e0) Create stream I0524 00:54:42.843493 7 log.go:172] (0xc002c2c6e0) (0xc001bae6e0) Stream added, broadcasting: 3 I0524 00:54:42.844460 7 log.go:172] (0xc002c2c6e0) Reply frame received for 3 I0524 00:54:42.844488 7 log.go:172] (0xc002c2c6e0) (0xc0012223c0) Create stream I0524 00:54:42.844497 7 log.go:172] (0xc002c2c6e0) (0xc0012223c0) Stream added, broadcasting: 5 I0524 00:54:42.845655 7 log.go:172] (0xc002c2c6e0) Reply frame received for 5 I0524 00:54:42.921922 7 log.go:172] (0xc002c2c6e0) Data frame received for 5 I0524 00:54:42.922028 7 log.go:172] (0xc0012223c0) (5) Data frame handling I0524 00:54:42.922067 7 log.go:172] (0xc002c2c6e0) Data frame received for 3 I0524 00:54:42.922088 7 log.go:172] (0xc001bae6e0) (3) Data frame handling I0524 00:54:42.922109 7 log.go:172] (0xc001bae6e0) (3) Data frame sent I0524 00:54:42.922126 7 log.go:172] (0xc002c2c6e0) Data frame received for 3 I0524 00:54:42.922139 7 log.go:172] (0xc001bae6e0) (3) Data frame handling I0524 00:54:42.923406 7 log.go:172] (0xc002c2c6e0) Data frame received for 1 I0524 00:54:42.923439 7 log.go:172] (0xc0015dc000) (1) Data frame handling I0524 00:54:42.923453 7 log.go:172] (0xc0015dc000) (1) Data frame sent I0524 00:54:42.923469 7 log.go:172] (0xc002c2c6e0) (0xc0015dc000) Stream removed, broadcasting: 1 I0524 00:54:42.923492 7 log.go:172] (0xc002c2c6e0) Go away received I0524 00:54:42.923579 7 log.go:172] (0xc002c2c6e0) (0xc0015dc000) Stream removed, broadcasting: 1 I0524 00:54:42.923594 7 log.go:172] (0xc002c2c6e0) (0xc001bae6e0) Stream removed, broadcasting: 3 I0524 00:54:42.923603 7 log.go:172] (0xc002c2c6e0) (0xc0012223c0) Stream removed, broadcasting: 5 May 24 00:54:42.923: INFO: Exec stderr: "" May 24 00:54:42.923: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1397 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 00:54:42.923: INFO: >>> kubeConfig: /root/.kube/config I0524 00:54:42.951290 7 log.go:172] (0xc002c2cf20) (0xc0015dc460) Create stream I0524 00:54:42.951330 7 log.go:172] (0xc002c2cf20) (0xc0015dc460) Stream added, broadcasting: 1 I0524 00:54:42.954319 7 log.go:172] (0xc002c2cf20) Reply frame received for 1 I0524 00:54:42.954354 7 log.go:172] (0xc002c2cf20) (0xc0015dc5a0) Create stream I0524 00:54:42.954365 7 log.go:172] (0xc002c2cf20) (0xc0015dc5a0) Stream added, broadcasting: 3 I0524 00:54:42.955240 7 log.go:172] (0xc002c2cf20) Reply frame received for 3 I0524 00:54:42.955278 7 log.go:172] (0xc002c2cf20) (0xc001222460) Create stream I0524 00:54:42.955290 7 log.go:172] (0xc002c2cf20) (0xc001222460) Stream added, broadcasting: 5 I0524 00:54:42.956152 7 log.go:172] (0xc002c2cf20) Reply frame received for 5 I0524 00:54:43.031859 7 log.go:172] (0xc002c2cf20) Data frame received for 3 I0524 00:54:43.031899 7 log.go:172] (0xc0015dc5a0) (3) Data frame handling I0524 00:54:43.031934 7 log.go:172] (0xc0015dc5a0) (3) Data frame sent I0524 00:54:43.031958 7 log.go:172] (0xc002c2cf20) Data frame received for 3 I0524 00:54:43.031975 7 log.go:172] (0xc0015dc5a0) (3) Data frame handling I0524 00:54:43.032285 7 log.go:172] (0xc002c2cf20) Data frame received for 5 I0524 00:54:43.032320 7 log.go:172] (0xc001222460) (5) Data frame handling I0524 00:54:43.034207 7 log.go:172] (0xc002c2cf20) Data frame received for 1 I0524 00:54:43.034249 7 log.go:172] 
(0xc0015dc460) (1) Data frame handling I0524 00:54:43.034295 7 log.go:172] (0xc0015dc460) (1) Data frame sent I0524 00:54:43.034322 7 log.go:172] (0xc002c2cf20) (0xc0015dc460) Stream removed, broadcasting: 1 I0524 00:54:43.034359 7 log.go:172] (0xc002c2cf20) Go away received I0524 00:54:43.034491 7 log.go:172] (0xc002c2cf20) (0xc0015dc460) Stream removed, broadcasting: 1 I0524 00:54:43.034513 7 log.go:172] (0xc002c2cf20) (0xc0015dc5a0) Stream removed, broadcasting: 3 I0524 00:54:43.034531 7 log.go:172] (0xc002c2cf20) (0xc001222460) Stream removed, broadcasting: 5 May 24 00:54:43.034: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:54:43.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-1397" for this suite. • [SLOW TEST:13.194 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":246,"skipped":4119,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:54:43.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-2960 STEP: creating a selector STEP: Creating the service pods in kubernetes May 24 00:54:43.167: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 24 00:54:43.267: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 24 00:54:45.272: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 24 00:54:47.291: INFO: The status of Pod netserver-0 is Running (Ready = false) May 24 00:54:49.271: INFO: The status of Pod netserver-0 is Running (Ready = false) May 24 00:54:51.271: INFO: The status of Pod netserver-0 is Running (Ready = false) May 24 00:54:53.272: INFO: The status of Pod netserver-0 is Running (Ready = false) May 24 00:54:55.271: INFO: The status of Pod netserver-0 is Running (Ready = false) May 24 00:54:57.271: INFO: The status of Pod netserver-0 is Running (Ready = false) May 24 00:54:59.272: INFO: The status of Pod netserver-0 is Running (Ready = true) May 24 00:54:59.277: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 24 00:55:03.349: INFO: ExecWithOptions 
{Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.236 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2960 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 00:55:03.349: INFO: >>> kubeConfig: /root/.kube/config I0524 00:55:03.381664 7 log.go:172] (0xc002ace2c0) (0xc001e7c640) Create stream I0524 00:55:03.381689 7 log.go:172] (0xc002ace2c0) (0xc001e7c640) Stream added, broadcasting: 1 I0524 00:55:03.384177 7 log.go:172] (0xc002ace2c0) Reply frame received for 1 I0524 00:55:03.384219 7 log.go:172] (0xc002ace2c0) (0xc0015dcc80) Create stream I0524 00:55:03.384236 7 log.go:172] (0xc002ace2c0) (0xc0015dcc80) Stream added, broadcasting: 3 I0524 00:55:03.385099 7 log.go:172] (0xc002ace2c0) Reply frame received for 3 I0524 00:55:03.385306 7 log.go:172] (0xc002ace2c0) (0xc001223e00) Create stream I0524 00:55:03.385324 7 log.go:172] (0xc002ace2c0) (0xc001223e00) Stream added, broadcasting: 5 I0524 00:55:03.386194 7 log.go:172] (0xc002ace2c0) Reply frame received for 5 I0524 00:55:04.492157 7 log.go:172] (0xc002ace2c0) Data frame received for 3 I0524 00:55:04.492190 7 log.go:172] (0xc0015dcc80) (3) Data frame handling I0524 00:55:04.492201 7 log.go:172] (0xc0015dcc80) (3) Data frame sent I0524 00:55:04.492250 7 log.go:172] (0xc002ace2c0) Data frame received for 5 I0524 00:55:04.492272 7 log.go:172] (0xc001223e00) (5) Data frame handling I0524 00:55:04.492492 7 log.go:172] (0xc002ace2c0) Data frame received for 3 I0524 00:55:04.492502 7 log.go:172] (0xc0015dcc80) (3) Data frame handling I0524 00:55:04.494721 7 log.go:172] (0xc002ace2c0) Data frame received for 1 I0524 00:55:04.494737 7 log.go:172] (0xc001e7c640) (1) Data frame handling I0524 00:55:04.494748 7 log.go:172] (0xc001e7c640) (1) Data frame sent I0524 00:55:04.494758 7 log.go:172] (0xc002ace2c0) (0xc001e7c640) Stream removed, broadcasting: 1 I0524 00:55:04.494793 7 log.go:172] (0xc002ace2c0) Go away received I0524 00:55:04.494854 7 log.go:172] (0xc002ace2c0) (0xc001e7c640) Stream removed, broadcasting: 1 I0524 00:55:04.494881 7 log.go:172] (0xc002ace2c0) (0xc0015dcc80) Stream removed, broadcasting: 3 I0524 00:55:04.494903 7 log.go:172] (0xc002ace2c0) (0xc001223e00) Stream removed, broadcasting: 5 May 24 00:55:04.494: INFO: Found all expected endpoints: [netserver-0] May 24 00:55:04.498: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.232 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2960 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 24 00:55:04.498: INFO: >>> kubeConfig: /root/.kube/config I0524 00:55:04.529752 7 log.go:172] (0xc003fcebb0) (0xc002a0a140) Create stream I0524 00:55:04.529785 7 log.go:172] (0xc003fcebb0) (0xc002a0a140) Stream added, broadcasting: 1 I0524 00:55:04.531888 7 log.go:172] (0xc003fcebb0) Reply frame received for 1 I0524 00:55:04.531923 7 log.go:172] (0xc003fcebb0) (0xc002932280) Create stream I0524 00:55:04.531932 7 log.go:172] (0xc003fcebb0) (0xc002932280) Stream added, broadcasting: 3 I0524 00:55:04.532931 7 log.go:172] (0xc003fcebb0) Reply frame received for 3 I0524 00:55:04.532960 7 log.go:172] (0xc003fcebb0) (0xc0015dcdc0) Create stream I0524 00:55:04.532973 7 log.go:172] (0xc003fcebb0) (0xc0015dcdc0) Stream added, broadcasting: 5 I0524 00:55:04.534051 7 log.go:172] (0xc003fcebb0) Reply frame received for 5 I0524 00:55:05.621537 7 log.go:172] (0xc003fcebb0) Data frame received for 5 I0524 00:55:05.621582 
7 log.go:172] (0xc0015dcdc0) (5) Data frame handling I0524 00:55:05.621646 7 log.go:172] (0xc003fcebb0) Data frame received for 3 I0524 00:55:05.621672 7 log.go:172] (0xc002932280) (3) Data frame handling I0524 00:55:05.621693 7 log.go:172] (0xc002932280) (3) Data frame sent I0524 00:55:05.621706 7 log.go:172] (0xc003fcebb0) Data frame received for 3 I0524 00:55:05.621722 7 log.go:172] (0xc002932280) (3) Data frame handling I0524 00:55:05.623966 7 log.go:172] (0xc003fcebb0) Data frame received for 1 I0524 00:55:05.624031 7 log.go:172] (0xc002a0a140) (1) Data frame handling I0524 00:55:05.624087 7 log.go:172] (0xc002a0a140) (1) Data frame sent I0524 00:55:05.624120 7 log.go:172] (0xc003fcebb0) (0xc002a0a140) Stream removed, broadcasting: 1 I0524 00:55:05.624252 7 log.go:172] (0xc003fcebb0) (0xc002a0a140) Stream removed, broadcasting: 1 I0524 00:55:05.624276 7 log.go:172] (0xc003fcebb0) (0xc002932280) Stream removed, broadcasting: 3 I0524 00:55:05.624295 7 log.go:172] (0xc003fcebb0) (0xc0015dcdc0) Stream removed, broadcasting: 5 May 24 00:55:05.624: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 I0524 00:55:05.624410 7 log.go:172] (0xc003fcebb0) Go away received May 24 00:55:05.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2960" for this suite. • [SLOW TEST:22.587 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":247,"skipped":4139,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:55:05.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:55:05.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-3346" for this suite. 
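(Editor's aside: the Table-transformation test that just finished exercises API content negotiation. A client can ask the server to render a resource as a meta.k8s.io Table, the representation kubectl uses for its column output, and a backend that cannot supply object metadata must answer 406 Not Acceptable rather than a malformed table. Below is a sketch of that negotiation with client-go, using this run's kubeconfig; the pods endpoint is illustrative, chosen because it does implement metadata and so returns 200, whereas the backend targeted by this test is one that responds 406.)

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Ask for the Table rendering via the Accept header; a backend that
	// cannot produce object metadata replies 406 Not Acceptable instead.
	var status int
	body, err := clientset.CoreV1().RESTClient().Get().
		AbsPath("/api/v1/namespaces/default/pods").
		SetHeader("Accept", "application/json;as=Table;v=v1;g=meta.k8s.io").
		Do(context.TODO()).
		StatusCode(&status).
		Raw()
	fmt.Println("HTTP status:", status)
	if err != nil {
		fmt.Println("error:", err) // this is the path a 406 backend takes
		return
	}
	fmt.Println("table bytes:", len(body))
}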
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":288,"completed":248,"skipped":4149,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:55:05.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:251 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1393 STEP: creating an pod May 24 00:55:05.767: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.13 --namespace=kubectl-2815 -- logs-generator --log-lines-total 100 --run-duration 20s' May 24 00:55:05.903: INFO: stderr: "" May 24 00:55:05.903: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Waiting for log generator to start. May 24 00:55:05.903: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] May 24 00:55:05.903: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-2815" to be "running and ready, or succeeded" May 24 00:55:05.936: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 32.657558ms May 24 00:55:07.940: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036960012s May 24 00:55:09.943: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.039705332s May 24 00:55:09.943: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 24 00:55:09.943: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for matching strings May 24 00:55:09.943: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2815' May 24 00:55:10.054: INFO: stderr: "" May 24 00:55:10.054: INFO: stdout: "I0524 00:55:08.576804 1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/l9b 233\nI0524 00:55:08.776931 1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/59p 260\nI0524 00:55:08.976974 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/mlt4 458\nI0524 00:55:09.176960 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/mr8 404\nI0524 00:55:09.376951 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/r9hw 292\nI0524 00:55:09.577006 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/rgl 423\nI0524 00:55:09.776988 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/j5pp 304\nI0524 00:55:09.976956 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/ckw 420\n" STEP: limiting log lines May 24 00:55:10.054: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2815 --tail=1' May 24 00:55:10.159: INFO: stderr: "" May 24 00:55:10.159: INFO: stdout: "I0524 00:55:09.976956 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/ckw 420\n" May 24 00:55:10.159: INFO: got output "I0524 00:55:09.976956 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/ckw 420\n" STEP: limiting log bytes May 24 00:55:10.159: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2815 --limit-bytes=1' May 24 00:55:10.272: INFO: stderr: "" May 24 00:55:10.272: INFO: stdout: "I" May 24 00:55:10.272: INFO: got output "I" STEP: exposing timestamps May 24 00:55:10.272: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2815 --tail=1 --timestamps' May 24 00:55:10.386: INFO: stderr: "" May 24 00:55:10.386: INFO: stdout: "2020-05-24T00:55:10.37740897Z I0524 00:55:10.377016 1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/mnb 506\n" May 24 00:55:10.386: INFO: got output "2020-05-24T00:55:10.37740897Z I0524 00:55:10.377016 1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/mnb 506\n" STEP: restricting to a time range May 24 00:55:12.886: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2815 --since=1s' May 24 00:55:13.018: INFO: stderr: "" May 24 00:55:13.018: INFO: stdout: "I0524 00:55:12.176930 1 logs_generator.go:76] 18 GET /api/v1/namespaces/kube-system/pods/lpq2 425\nI0524 00:55:12.376987 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/nb2 445\nI0524 00:55:12.576935 1 logs_generator.go:76] 20 POST /api/v1/namespaces/default/pods/nm5 307\nI0524 00:55:12.776974 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/mbfb 245\nI0524 00:55:12.976899 1 logs_generator.go:76] 22 GET /api/v1/namespaces/kube-system/pods/pl5z 245\n" May 24 00:55:13.018: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2815 --since=24h' May 24 00:55:13.131:
INFO: stderr: "" May 24 00:55:13.131: INFO: stdout: "I0524 00:55:08.576804 1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/l9b 233\nI0524 00:55:08.776931 1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/59p 260\nI0524 00:55:08.976974 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/mlt4 458\nI0524 00:55:09.176960 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/mr8 404\nI0524 00:55:09.376951 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/r9hw 292\nI0524 00:55:09.577006 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/rgl 423\nI0524 00:55:09.776988 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/j5pp 304\nI0524 00:55:09.976956 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/ckw 420\nI0524 00:55:10.176969 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/fk7 346\nI0524 00:55:10.377016 1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/mnb 506\nI0524 00:55:10.576960 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/z6dm 531\nI0524 00:55:10.777003 1 logs_generator.go:76] 11 POST /api/v1/namespaces/kube-system/pods/rb55 402\nI0524 00:55:10.976952 1 logs_generator.go:76] 12 GET /api/v1/namespaces/default/pods/xxj6 285\nI0524 00:55:11.176979 1 logs_generator.go:76] 13 GET /api/v1/namespaces/default/pods/8bp 347\nI0524 00:55:11.376935 1 logs_generator.go:76] 14 GET /api/v1/namespaces/ns/pods/r5n 501\nI0524 00:55:11.576962 1 logs_generator.go:76] 15 POST /api/v1/namespaces/default/pods/q6cf 560\nI0524 00:55:11.776961 1 logs_generator.go:76] 16 POST /api/v1/namespaces/kube-system/pods/v65v 351\nI0524 00:55:11.976946 1 logs_generator.go:76] 17 POST /api/v1/namespaces/default/pods/fmj 232\nI0524 00:55:12.176930 1 logs_generator.go:76] 18 GET /api/v1/namespaces/kube-system/pods/lpq2 425\nI0524 00:55:12.376987 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/nb2 445\nI0524 00:55:12.576935 1 logs_generator.go:76] 20 POST /api/v1/namespaces/default/pods/nm5 307\nI0524 00:55:12.776974 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/mbfb 245\nI0524 00:55:12.976899 1 logs_generator.go:76] 22 GET /api/v1/namespaces/kube-system/pods/pl5z 245\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1399 May 24 00:55:13.131: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-2815' May 24 00:55:25.257: INFO: stderr: "" May 24 00:55:25.257: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:55:25.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2815" for this suite. 
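The filtering steps above map one-to-one onto kubectl logs flags. A minimal sketch against the pod this spec created (logs-generator in namespace kubectl-2815; any running pod behaves the same):

    kubectl logs logs-generator -n kubectl-2815                        # full log
    kubectl logs logs-generator -n kubectl-2815 --tail=1               # last line only
    kubectl logs logs-generator -n kubectl-2815 --limit-bytes=1        # first byte only
    kubectl logs logs-generator -n kubectl-2815 --tail=1 --timestamps  # prefix RFC3339 timestamps
    kubectl logs logs-generator -n kubectl-2815 --since=1s             # only entries from the last second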
• [SLOW TEST:19.574 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1389 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":288,"completed":249,"skipped":4166,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:55:25.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:55:42.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1806" for this suite. • [SLOW TEST:17.200 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":288,"completed":250,"skipped":4179,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:55:42.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-55923397-f6e4-4114-a75e-91c35d10774c STEP: Creating a pod to test consume secrets May 24 00:55:42.572: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ae9a9380-b720-48d9-beed-9777d3cf19a3" in namespace "projected-6866" to be "Succeeded or Failed" May 24 00:55:42.590: INFO: Pod "pod-projected-secrets-ae9a9380-b720-48d9-beed-9777d3cf19a3": Phase="Pending", Reason="", readiness=false. Elapsed: 18.015662ms May 24 00:55:44.651: INFO: Pod "pod-projected-secrets-ae9a9380-b720-48d9-beed-9777d3cf19a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07921126s May 24 00:55:46.675: INFO: Pod "pod-projected-secrets-ae9a9380-b720-48d9-beed-9777d3cf19a3": Phase="Running", Reason="", readiness=true. Elapsed: 4.102582314s May 24 00:55:48.679: INFO: Pod "pod-projected-secrets-ae9a9380-b720-48d9-beed-9777d3cf19a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.107406253s STEP: Saw pod success May 24 00:55:48.680: INFO: Pod "pod-projected-secrets-ae9a9380-b720-48d9-beed-9777d3cf19a3" satisfied condition "Succeeded or Failed" May 24 00:55:48.682: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-ae9a9380-b720-48d9-beed-9777d3cf19a3 container projected-secret-volume-test: STEP: delete the pod May 24 00:55:48.723: INFO: Waiting for pod pod-projected-secrets-ae9a9380-b720-48d9-beed-9777d3cf19a3 to disappear May 24 00:55:48.770: INFO: Pod pod-projected-secrets-ae9a9380-b720-48d9-beed-9777d3cf19a3 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:55:48.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6866" for this suite. 
• [SLOW TEST:6.302 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":251,"skipped":4196,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:55:48.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-232 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet May 24 00:55:48.914: INFO: Found 0 stateful pods, waiting for 3 May 24 00:55:58.920: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 24 00:55:58.920: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 24 00:55:58.920: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false May 24 00:56:08.920: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 24 00:56:08.920: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 24 00:56:08.920: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 24 00:56:08.930: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-232 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 24 00:56:09.236: INFO: stderr: "I0524 00:56:09.098656 3445 log.go:172] (0xc0000e9a20) (0xc00047f900) Create stream\nI0524 00:56:09.098722 3445 log.go:172] (0xc0000e9a20) (0xc00047f900) Stream added, broadcasting: 1\nI0524 00:56:09.103033 3445 log.go:172] (0xc0000e9a20) Reply frame received for 1\nI0524 00:56:09.103078 3445 log.go:172] (0xc0000e9a20) (0xc000686e60) Create stream\nI0524 00:56:09.103094 3445 log.go:172] (0xc0000e9a20) (0xc000686e60) Stream added, broadcasting: 3\nI0524 00:56:09.104099 3445 log.go:172] (0xc0000e9a20) Reply frame received for 3\nI0524 00:56:09.104142 3445 log.go:172] (0xc0000e9a20) (0xc000598c80) Create stream\nI0524 00:56:09.104159 3445 log.go:172] 
(0xc0000e9a20) (0xc000598c80) Stream added, broadcasting: 5\nI0524 00:56:09.105047 3445 log.go:172] (0xc0000e9a20) Reply frame received for 5\nI0524 00:56:09.195935 3445 log.go:172] (0xc0000e9a20) Data frame received for 5\nI0524 00:56:09.195969 3445 log.go:172] (0xc000598c80) (5) Data frame handling\nI0524 00:56:09.195991 3445 log.go:172] (0xc000598c80) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0524 00:56:09.232151 3445 log.go:172] (0xc0000e9a20) Data frame received for 3\nI0524 00:56:09.232184 3445 log.go:172] (0xc000686e60) (3) Data frame handling\nI0524 00:56:09.232191 3445 log.go:172] (0xc000686e60) (3) Data frame sent\nI0524 00:56:09.232205 3445 log.go:172] (0xc0000e9a20) Data frame received for 5\nI0524 00:56:09.232210 3445 log.go:172] (0xc000598c80) (5) Data frame handling\nI0524 00:56:09.232655 3445 log.go:172] (0xc0000e9a20) Data frame received for 3\nI0524 00:56:09.232668 3445 log.go:172] (0xc000686e60) (3) Data frame handling\nI0524 00:56:09.233840 3445 log.go:172] (0xc0000e9a20) Data frame received for 1\nI0524 00:56:09.233853 3445 log.go:172] (0xc00047f900) (1) Data frame handling\nI0524 00:56:09.233860 3445 log.go:172] (0xc00047f900) (1) Data frame sent\nI0524 00:56:09.233871 3445 log.go:172] (0xc0000e9a20) (0xc00047f900) Stream removed, broadcasting: 1\nI0524 00:56:09.233909 3445 log.go:172] (0xc0000e9a20) Go away received\nI0524 00:56:09.234101 3445 log.go:172] (0xc0000e9a20) (0xc00047f900) Stream removed, broadcasting: 1\nI0524 00:56:09.234114 3445 log.go:172] (0xc0000e9a20) (0xc000686e60) Stream removed, broadcasting: 3\nI0524 00:56:09.234122 3445 log.go:172] (0xc0000e9a20) (0xc000598c80) Stream removed, broadcasting: 5\n" May 24 00:56:09.236: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 24 00:56:09.236: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 24 00:56:19.267: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 24 00:56:29.322: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-232 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 00:56:29.572: INFO: stderr: "I0524 00:56:29.461250 3467 log.go:172] (0xc0009d13f0) (0xc00083dcc0) Create stream\nI0524 00:56:29.461320 3467 log.go:172] (0xc0009d13f0) (0xc00083dcc0) Stream added, broadcasting: 1\nI0524 00:56:29.473739 3467 log.go:172] (0xc0009d13f0) Reply frame received for 1\nI0524 00:56:29.473793 3467 log.go:172] (0xc0009d13f0) (0xc0008466e0) Create stream\nI0524 00:56:29.473808 3467 log.go:172] (0xc0009d13f0) (0xc0008466e0) Stream added, broadcasting: 3\nI0524 00:56:29.475043 3467 log.go:172] (0xc0009d13f0) Reply frame received for 3\nI0524 00:56:29.475086 3467 log.go:172] (0xc0009d13f0) (0xc0007c1d60) Create stream\nI0524 00:56:29.475101 3467 log.go:172] (0xc0009d13f0) (0xc0007c1d60) Stream added, broadcasting: 5\nI0524 00:56:29.477624 3467 log.go:172] (0xc0009d13f0) Reply frame received for 5\nI0524 00:56:29.564693 3467 log.go:172] (0xc0009d13f0) Data frame received for 3\nI0524 00:56:29.564724 3467 log.go:172] (0xc0008466e0) (3) Data frame handling\nI0524 00:56:29.564738 3467 log.go:172] (0xc0008466e0) (3) Data frame sent\nI0524 
00:56:29.564748 3467 log.go:172] (0xc0009d13f0) Data frame received for 3\nI0524 00:56:29.564757 3467 log.go:172] (0xc0008466e0) (3) Data frame handling\nI0524 00:56:29.564799 3467 log.go:172] (0xc0009d13f0) Data frame received for 5\nI0524 00:56:29.564810 3467 log.go:172] (0xc0007c1d60) (5) Data frame handling\nI0524 00:56:29.564821 3467 log.go:172] (0xc0007c1d60) (5) Data frame sent\nI0524 00:56:29.564838 3467 log.go:172] (0xc0009d13f0) Data frame received for 5\nI0524 00:56:29.564847 3467 log.go:172] (0xc0007c1d60) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0524 00:56:29.566376 3467 log.go:172] (0xc0009d13f0) Data frame received for 1\nI0524 00:56:29.566406 3467 log.go:172] (0xc00083dcc0) (1) Data frame handling\nI0524 00:56:29.566434 3467 log.go:172] (0xc00083dcc0) (1) Data frame sent\nI0524 00:56:29.566452 3467 log.go:172] (0xc0009d13f0) (0xc00083dcc0) Stream removed, broadcasting: 1\nI0524 00:56:29.566467 3467 log.go:172] (0xc0009d13f0) Go away received\nI0524 00:56:29.566969 3467 log.go:172] (0xc0009d13f0) (0xc00083dcc0) Stream removed, broadcasting: 1\nI0524 00:56:29.566993 3467 log.go:172] (0xc0009d13f0) (0xc0008466e0) Stream removed, broadcasting: 3\nI0524 00:56:29.567008 3467 log.go:172] (0xc0009d13f0) (0xc0007c1d60) Stream removed, broadcasting: 5\n" May 24 00:56:29.572: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 24 00:56:29.572: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 24 00:56:49.596: INFO: Waiting for StatefulSet statefulset-232/ss2 to complete update May 24 00:56:49.596: INFO: Waiting for Pod statefulset-232/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision May 24 00:56:59.606: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-232 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 24 00:56:59.849: INFO: stderr: "I0524 00:56:59.734963 3487 log.go:172] (0xc000ad7550) (0xc000c50140) Create stream\nI0524 00:56:59.735021 3487 log.go:172] (0xc000ad7550) (0xc000c50140) Stream added, broadcasting: 1\nI0524 00:56:59.738895 3487 log.go:172] (0xc000ad7550) Reply frame received for 1\nI0524 00:56:59.738934 3487 log.go:172] (0xc000ad7550) (0xc000712f00) Create stream\nI0524 00:56:59.738950 3487 log.go:172] (0xc000ad7550) (0xc000712f00) Stream added, broadcasting: 3\nI0524 00:56:59.739770 3487 log.go:172] (0xc000ad7550) Reply frame received for 3\nI0524 00:56:59.739808 3487 log.go:172] (0xc000ad7550) (0xc00070a640) Create stream\nI0524 00:56:59.739819 3487 log.go:172] (0xc000ad7550) (0xc00070a640) Stream added, broadcasting: 5\nI0524 00:56:59.740514 3487 log.go:172] (0xc000ad7550) Reply frame received for 5\nI0524 00:56:59.813398 3487 log.go:172] (0xc000ad7550) Data frame received for 5\nI0524 00:56:59.813434 3487 log.go:172] (0xc00070a640) (5) Data frame handling\nI0524 00:56:59.813460 3487 log.go:172] (0xc00070a640) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0524 00:56:59.840755 3487 log.go:172] (0xc000ad7550) Data frame received for 3\nI0524 00:56:59.840789 3487 log.go:172] (0xc000712f00) (3) Data frame handling\nI0524 00:56:59.840804 3487 log.go:172] (0xc000712f00) (3) Data frame sent\nI0524 00:56:59.840896 3487 log.go:172] (0xc000ad7550) Data frame received for 5\nI0524 00:56:59.840929 3487 
log.go:172] (0xc00070a640) (5) Data frame handling\nI0524 00:56:59.840965 3487 log.go:172] (0xc000ad7550) Data frame received for 3\nI0524 00:56:59.840980 3487 log.go:172] (0xc000712f00) (3) Data frame handling\nI0524 00:56:59.843382 3487 log.go:172] (0xc000ad7550) Data frame received for 1\nI0524 00:56:59.843406 3487 log.go:172] (0xc000c50140) (1) Data frame handling\nI0524 00:56:59.843416 3487 log.go:172] (0xc000c50140) (1) Data frame sent\nI0524 00:56:59.843432 3487 log.go:172] (0xc000ad7550) (0xc000c50140) Stream removed, broadcasting: 1\nI0524 00:56:59.843446 3487 log.go:172] (0xc000ad7550) Go away received\nI0524 00:56:59.844028 3487 log.go:172] (0xc000ad7550) (0xc000c50140) Stream removed, broadcasting: 1\nI0524 00:56:59.844074 3487 log.go:172] (0xc000ad7550) (0xc000712f00) Stream removed, broadcasting: 3\nI0524 00:56:59.844090 3487 log.go:172] (0xc000ad7550) (0xc00070a640) Stream removed, broadcasting: 5\n" May 24 00:56:59.849: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 24 00:56:59.849: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 24 00:57:09.881: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 24 00:57:19.976: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=statefulset-232 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 24 00:57:20.225: INFO: stderr: "I0524 00:57:20.118977 3507 log.go:172] (0xc0009bd810) (0xc000b026e0) Create stream\nI0524 00:57:20.119028 3507 log.go:172] (0xc0009bd810) (0xc000b026e0) Stream added, broadcasting: 1\nI0524 00:57:20.122225 3507 log.go:172] (0xc0009bd810) Reply frame received for 1\nI0524 00:57:20.122342 3507 log.go:172] (0xc0009bd810) (0xc00062adc0) Create stream\nI0524 00:57:20.122380 3507 log.go:172] (0xc0009bd810) (0xc00062adc0) Stream added, broadcasting: 3\nI0524 00:57:20.123480 3507 log.go:172] (0xc0009bd810) Reply frame received for 3\nI0524 00:57:20.123531 3507 log.go:172] (0xc0009bd810) (0xc000b02780) Create stream\nI0524 00:57:20.123551 3507 log.go:172] (0xc0009bd810) (0xc000b02780) Stream added, broadcasting: 5\nI0524 00:57:20.124540 3507 log.go:172] (0xc0009bd810) Reply frame received for 5\nI0524 00:57:20.218369 3507 log.go:172] (0xc0009bd810) Data frame received for 5\nI0524 00:57:20.218433 3507 log.go:172] (0xc000b02780) (5) Data frame handling\nI0524 00:57:20.218445 3507 log.go:172] (0xc000b02780) (5) Data frame sent\nI0524 00:57:20.218454 3507 log.go:172] (0xc0009bd810) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0524 00:57:20.218490 3507 log.go:172] (0xc0009bd810) Data frame received for 3\nI0524 00:57:20.218547 3507 log.go:172] (0xc00062adc0) (3) Data frame handling\nI0524 00:57:20.218573 3507 log.go:172] (0xc00062adc0) (3) Data frame sent\nI0524 00:57:20.218707 3507 log.go:172] (0xc0009bd810) Data frame received for 3\nI0524 00:57:20.218738 3507 log.go:172] (0xc00062adc0) (3) Data frame handling\nI0524 00:57:20.218772 3507 log.go:172] (0xc000b02780) (5) Data frame handling\nI0524 00:57:20.220007 3507 log.go:172] (0xc0009bd810) Data frame received for 1\nI0524 00:57:20.220030 3507 log.go:172] (0xc000b026e0) (1) Data frame handling\nI0524 00:57:20.220045 3507 log.go:172] (0xc000b026e0) (1) Data frame sent\nI0524 00:57:20.220076 3507 log.go:172] (0xc0009bd810) (0xc000b026e0) Stream removed, broadcasting: 
1\nI0524 00:57:20.220107 3507 log.go:172] (0xc0009bd810) Go away received\nI0524 00:57:20.220499 3507 log.go:172] (0xc0009bd810) (0xc000b026e0) Stream removed, broadcasting: 1\nI0524 00:57:20.220523 3507 log.go:172] (0xc0009bd810) (0xc00062adc0) Stream removed, broadcasting: 3\nI0524 00:57:20.220536 3507 log.go:172] (0xc0009bd810) (0xc000b02780) Stream removed, broadcasting: 5\n" May 24 00:57:20.225: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 24 00:57:20.225: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 24 00:57:40.267: INFO: Waiting for StatefulSet statefulset-232/ss2 to complete update May 24 00:57:40.267: INFO: Waiting for Pod statefulset-232/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 24 00:57:50.277: INFO: Deleting all statefulset in ns statefulset-232 May 24 00:57:50.280: INFO: Scaling statefulset ss2 to 0 May 24 00:58:00.301: INFO: Waiting for statefulset status.replicas updated to 0 May 24 00:58:00.304: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:58:00.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-232" for this suite. • [SLOW TEST:131.544 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":288,"completed":252,"skipped":4207,"failed":0} [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:58:00.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-a1205b91-8313-44c3-ab45-0f889d3c6bdc STEP: Creating a pod to test consume configMaps May 24 00:58:00.415: INFO: Waiting up to 5m0s for pod "pod-configmaps-8b49a166-3268-41ee-8cde-36b898de992a" in namespace "configmap-9176" to be "Succeeded or Failed" May 24 00:58:00.433: INFO: Pod "pod-configmaps-8b49a166-3268-41ee-8cde-36b898de992a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.011976ms May 24 00:58:02.460: INFO: Pod "pod-configmaps-8b49a166-3268-41ee-8cde-36b898de992a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045494205s May 24 00:58:04.465: INFO: Pod "pod-configmaps-8b49a166-3268-41ee-8cde-36b898de992a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049977956s STEP: Saw pod success May 24 00:58:04.465: INFO: Pod "pod-configmaps-8b49a166-3268-41ee-8cde-36b898de992a" satisfied condition "Succeeded or Failed" May 24 00:58:04.468: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-8b49a166-3268-41ee-8cde-36b898de992a container configmap-volume-test: STEP: delete the pod May 24 00:58:04.580: INFO: Waiting for pod pod-configmaps-8b49a166-3268-41ee-8cde-36b898de992a to disappear May 24 00:58:04.597: INFO: Pod pod-configmaps-8b49a166-3268-41ee-8cde-36b898de992a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:58:04.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9176" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":253,"skipped":4207,"failed":0} SSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:58:04.626: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-5242 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-5242 STEP: creating replication controller externalsvc in namespace services-5242 I0524 00:58:04.921768 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-5242, replica count: 2 I0524 00:58:07.972207 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0524 00:58:10.972483 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName May 24 00:58:11.066: INFO: Creating new exec pod May 24 00:58:15.108: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-5242 execpod6wrvk -- /bin/sh -x -c nslookup nodeport-service' May 24 00:58:15.435: INFO: stderr: "I0524 00:58:15.230067 3525 log.go:172] (0xc00096ce70) (0xc0009b6460) Create stream\nI0524 00:58:15.230121 3525 log.go:172] (0xc00096ce70) 
(0xc0009b6460) Stream added, broadcasting: 1\nI0524 00:58:15.233781 3525 log.go:172] (0xc00096ce70) Reply frame received for 1\nI0524 00:58:15.233812 3525 log.go:172] (0xc00096ce70) (0xc0009b6500) Create stream\nI0524 00:58:15.233834 3525 log.go:172] (0xc00096ce70) (0xc0009b6500) Stream added, broadcasting: 3\nI0524 00:58:15.234746 3525 log.go:172] (0xc00096ce70) Reply frame received for 3\nI0524 00:58:15.234776 3525 log.go:172] (0xc00096ce70) (0xc0009b65a0) Create stream\nI0524 00:58:15.234798 3525 log.go:172] (0xc00096ce70) (0xc0009b65a0) Stream added, broadcasting: 5\nI0524 00:58:15.235716 3525 log.go:172] (0xc00096ce70) Reply frame received for 5\nI0524 00:58:15.300406 3525 log.go:172] (0xc00096ce70) Data frame received for 5\nI0524 00:58:15.300431 3525 log.go:172] (0xc0009b65a0) (5) Data frame handling\nI0524 00:58:15.300447 3525 log.go:172] (0xc0009b65a0) (5) Data frame sent\n+ nslookup nodeport-service\nI0524 00:58:15.423758 3525 log.go:172] (0xc00096ce70) Data frame received for 3\nI0524 00:58:15.423793 3525 log.go:172] (0xc0009b6500) (3) Data frame handling\nI0524 00:58:15.423826 3525 log.go:172] (0xc0009b6500) (3) Data frame sent\nI0524 00:58:15.424978 3525 log.go:172] (0xc00096ce70) Data frame received for 3\nI0524 00:58:15.424998 3525 log.go:172] (0xc0009b6500) (3) Data frame handling\nI0524 00:58:15.425022 3525 log.go:172] (0xc0009b6500) (3) Data frame sent\nI0524 00:58:15.425947 3525 log.go:172] (0xc00096ce70) Data frame received for 5\nI0524 00:58:15.426052 3525 log.go:172] (0xc0009b65a0) (5) Data frame handling\nI0524 00:58:15.426389 3525 log.go:172] (0xc00096ce70) Data frame received for 3\nI0524 00:58:15.426482 3525 log.go:172] (0xc0009b6500) (3) Data frame handling\nI0524 00:58:15.428552 3525 log.go:172] (0xc00096ce70) Data frame received for 1\nI0524 00:58:15.428590 3525 log.go:172] (0xc0009b6460) (1) Data frame handling\nI0524 00:58:15.428632 3525 log.go:172] (0xc0009b6460) (1) Data frame sent\nI0524 00:58:15.428734 3525 log.go:172] (0xc00096ce70) (0xc0009b6460) Stream removed, broadcasting: 1\nI0524 00:58:15.428778 3525 log.go:172] (0xc00096ce70) Go away received\nI0524 00:58:15.429358 3525 log.go:172] (0xc00096ce70) (0xc0009b6460) Stream removed, broadcasting: 1\nI0524 00:58:15.429387 3525 log.go:172] (0xc00096ce70) (0xc0009b6500) Stream removed, broadcasting: 3\nI0524 00:58:15.429404 3525 log.go:172] (0xc00096ce70) (0xc0009b65a0) Stream removed, broadcasting: 5\n" May 24 00:58:15.435: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-5242.svc.cluster.local\tcanonical name = externalsvc.services-5242.svc.cluster.local.\nName:\texternalsvc.services-5242.svc.cluster.local\nAddress: 10.107.190.59\n\n" STEP: deleting ReplicationController externalsvc in namespace services-5242, will wait for the garbage collector to delete the pods May 24 00:58:15.523: INFO: Deleting ReplicationController externalsvc took: 6.505858ms May 24 00:58:15.723: INFO: Terminating ReplicationController externalsvc pods took: 200.191983ms May 24 00:58:25.347: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:58:25.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5242" for this suite. 
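What this spec verifies is visible in the nslookup output above: once the service type becomes ExternalName, cluster DNS serves a CNAME to the external name instead of a cluster IP. A minimal sketch of the same end state with hypothetical names (the test flips an existing NodePort service in place; creating the ExternalName service directly yields the same DNS behavior):

    kubectl create service externalname demo-svc \
        --external-name=externalsvc.default.svc.cluster.local
    # resolve from inside the cluster; expect a CNAME to the external name
    kubectl run dns-check --image=busybox:1.28 --rm -it --restart=Never -- nslookup demo-svc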
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:20.803 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":288,"completed":254,"skipped":4212,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:58:25.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 24 00:58:25.512: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-087f167e-ba5f-4153-a133-c0c74506f032" in namespace "security-context-test-6984" to be "Succeeded or Failed" May 24 00:58:25.515: INFO: Pod "busybox-readonly-false-087f167e-ba5f-4153-a133-c0c74506f032": Phase="Pending", Reason="", readiness=false. Elapsed: 3.203797ms May 24 00:58:27.519: INFO: Pod "busybox-readonly-false-087f167e-ba5f-4153-a133-c0c74506f032": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006706705s May 24 00:58:29.550: INFO: Pod "busybox-readonly-false-087f167e-ba5f-4153-a133-c0c74506f032": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038489238s May 24 00:58:29.551: INFO: Pod "busybox-readonly-false-087f167e-ba5f-4153-a133-c0c74506f032" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:58:29.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6984" for this suite. 
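The pod above runs with readOnlyRootFilesystem=false, so a write to the container's root filesystem is expected to succeed; flipping the field to true makes the same write fail. A minimal sketch with a hypothetical pod name:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox-readonly-demo
    spec:
      restartPolicy: Never
      containers:
      - name: busybox
        image: busybox
        command: ["sh", "-c", "echo writable > /rootfs-check && cat /rootfs-check"]
        securityContext:
          readOnlyRootFilesystem: false   # set to true and the write fails
    EOF
    kubectl logs busybox-readonly-demo    # prints "writable" once the pod has run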
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":288,"completed":255,"skipped":4266,"failed":0} ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:58:29.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium May 24 00:58:29.718: INFO: Waiting up to 5m0s for pod "pod-8ddf5b27-f955-44d4-8ea1-539b6b728284" in namespace "emptydir-1684" to be "Succeeded or Failed" May 24 00:58:29.722: INFO: Pod "pod-8ddf5b27-f955-44d4-8ea1-539b6b728284": Phase="Pending", Reason="", readiness=false. Elapsed: 4.317255ms May 24 00:58:31.759: INFO: Pod "pod-8ddf5b27-f955-44d4-8ea1-539b6b728284": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041758924s May 24 00:58:33.763: INFO: Pod "pod-8ddf5b27-f955-44d4-8ea1-539b6b728284": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045483529s STEP: Saw pod success May 24 00:58:33.763: INFO: Pod "pod-8ddf5b27-f955-44d4-8ea1-539b6b728284" satisfied condition "Succeeded or Failed" May 24 00:58:33.766: INFO: Trying to get logs from node latest-worker pod pod-8ddf5b27-f955-44d4-8ea1-539b6b728284 container test-container: STEP: delete the pod May 24 00:58:33.814: INFO: Waiting for pod pod-8ddf5b27-f955-44d4-8ea1-539b6b728284 to disappear May 24 00:58:33.836: INFO: Pod pod-8ddf5b27-f955-44d4-8ea1-539b6b728284 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 00:58:33.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1684" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":256,"skipped":4266,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 00:58:33.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-f3e90f90-edc9-46be-932e-998bb3cb482c in namespace container-probe-5105 May 24 00:58:38.000: INFO: Started pod liveness-f3e90f90-edc9-46be-932e-998bb3cb482c in namespace container-probe-5105 STEP: checking the pod's current state and verifying that restartCount is present May 24 00:58:38.003: INFO: Initial restart count of pod liveness-f3e90f90-edc9-46be-932e-998bb3cb482c is 0 May 24 00:58:52.176: INFO: Restart count of pod container-probe-5105/liveness-f3e90f90-edc9-46be-932e-998bb3cb482c is now 1 (14.172252744s elapsed) May 24 00:59:12.222: INFO: Restart count of pod container-probe-5105/liveness-f3e90f90-edc9-46be-932e-998bb3cb482c is now 2 (34.218386074s elapsed) May 24 00:59:32.276: INFO: Restart count of pod container-probe-5105/liveness-f3e90f90-edc9-46be-932e-998bb3cb482c is now 3 (54.273052051s elapsed) May 24 00:59:50.393: INFO: Restart count of pod container-probe-5105/liveness-f3e90f90-edc9-46be-932e-998bb3cb482c is now 4 (1m12.389420355s elapsed) May 24 01:00:58.609: INFO: Restart count of pod container-probe-5105/liveness-f3e90f90-edc9-46be-932e-998bb3cb482c is now 5 (2m20.606017895s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 01:00:58.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5105" for this suite. 
• [SLOW TEST:144.777 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":288,"completed":257,"skipped":4281,"failed":0} SS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 01:00:58.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's command May 24 01:00:59.049: INFO: Waiting up to 5m0s for pod "var-expansion-e5c26399-9bd0-4974-b5eb-2f435e77f5b8" in namespace "var-expansion-9813" to be "Succeeded or Failed" May 24 01:00:59.106: INFO: Pod "var-expansion-e5c26399-9bd0-4974-b5eb-2f435e77f5b8": Phase="Pending", Reason="", readiness=false. Elapsed: 57.251883ms May 24 01:01:01.346: INFO: Pod "var-expansion-e5c26399-9bd0-4974-b5eb-2f435e77f5b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.296442803s May 24 01:01:03.350: INFO: Pod "var-expansion-e5c26399-9bd0-4974-b5eb-2f435e77f5b8": Phase="Running", Reason="", readiness=true. Elapsed: 4.301145734s May 24 01:01:05.354: INFO: Pod "var-expansion-e5c26399-9bd0-4974-b5eb-2f435e77f5b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.305072605s STEP: Saw pod success May 24 01:01:05.354: INFO: Pod "var-expansion-e5c26399-9bd0-4974-b5eb-2f435e77f5b8" satisfied condition "Succeeded or Failed" May 24 01:01:05.356: INFO: Trying to get logs from node latest-worker pod var-expansion-e5c26399-9bd0-4974-b5eb-2f435e77f5b8 container dapi-container: STEP: delete the pod May 24 01:01:05.421: INFO: Waiting for pod var-expansion-e5c26399-9bd0-4974-b5eb-2f435e77f5b8 to disappear May 24 01:01:05.425: INFO: Pod var-expansion-e5c26399-9bd0-4974-b5eb-2f435e77f5b8 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 01:01:05.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9813" for this suite. 
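Substitution in a container's command uses the $(VAR) syntax, which Kubernetes expands from the container's environment before the command runs, independently of any shell expansion. A minimal sketch with hypothetical names; the quoted heredoc keeps the local shell from treating $(MESSAGE) as a command substitution:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: var-expansion-demo
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        command: ["sh", "-c", "echo $(MESSAGE)"]   # $(MESSAGE) expanded by Kubernetes, not the shell
        env:
        - name: MESSAGE
          value: "hello from var expansion"
    EOF
    kubectl logs var-expansion-demo   # prints: hello from var expansion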
• [SLOW TEST:6.745 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":288,"completed":258,"skipped":4283,"failed":0} S ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 01:01:05.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6304 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6304;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6304 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6304;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6304.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6304.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6304.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6304.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6304.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6304.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6304.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6304.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6304.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6304.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6304.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6304.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6304.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 37.215.100.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.100.215.37_udp@PTR;check="$$(dig +tcp +noall +answer +search 37.215.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.215.37_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6304 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6304;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6304 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6304;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6304.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6304.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6304.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6304.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6304.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6304.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6304.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6304.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6304.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6304.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6304.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6304.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6304.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 37.215.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.215.37_udp@PTR;check="$$(dig +tcp +noall +answer +search 37.215.100.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.100.215.37_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 24 01:01:11.770: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:11.790: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:11.793: INFO: Unable to read wheezy_udp@dns-test-service.dns-6304 from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:11.796: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6304 from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:11.800: INFO: Unable to read wheezy_udp@dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:11.813: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:11.816: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:11.851: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:11.874: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:11.877: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:11.880: INFO: Unable to read jessie_udp@dns-test-service.dns-6304 from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:11.883: INFO: Unable to read jessie_tcp@dns-test-service.dns-6304 from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:11.886: INFO: Unable to read jessie_udp@dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:11.889: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:11.892: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:11.896: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:11.915: INFO: Lookups using dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6304 wheezy_tcp@dns-test-service.dns-6304 wheezy_udp@dns-test-service.dns-6304.svc wheezy_tcp@dns-test-service.dns-6304.svc wheezy_udp@_http._tcp.dns-test-service.dns-6304.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6304.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6304 jessie_tcp@dns-test-service.dns-6304 jessie_udp@dns-test-service.dns-6304.svc jessie_tcp@dns-test-service.dns-6304.svc jessie_udp@_http._tcp.dns-test-service.dns-6304.svc jessie_tcp@_http._tcp.dns-test-service.dns-6304.svc] May 24 01:01:16.920: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:16.926: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:16.929: INFO: Unable to read wheezy_udp@dns-test-service.dns-6304 from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:16.933: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6304 from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:16.937: INFO: Unable to read wheezy_udp@dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:16.941: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:16.944: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:16.946: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:17.031: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:17.043: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:17.046: INFO: Unable to read jessie_udp@dns-test-service.dns-6304 from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:17.050: INFO: Unable to read jessie_tcp@dns-test-service.dns-6304 from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:17.053: INFO: Unable to read jessie_udp@dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:17.056: INFO: Unable to read jessie_tcp@dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:17.059: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:17.062: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:17.079: INFO: Lookups using dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6304 wheezy_tcp@dns-test-service.dns-6304 wheezy_udp@dns-test-service.dns-6304.svc wheezy_tcp@dns-test-service.dns-6304.svc wheezy_udp@_http._tcp.dns-test-service.dns-6304.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6304.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6304 jessie_tcp@dns-test-service.dns-6304 jessie_udp@dns-test-service.dns-6304.svc jessie_tcp@dns-test-service.dns-6304.svc jessie_udp@_http._tcp.dns-test-service.dns-6304.svc jessie_tcp@_http._tcp.dns-test-service.dns-6304.svc] May 24 01:01:21.919: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:21.923: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:21.927: INFO: Unable to read wheezy_udp@dns-test-service.dns-6304 from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:21.930: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6304 from pod 
dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:21.933: INFO: Unable to read wheezy_udp@dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:21.936: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:21.939: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:21.958: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:21.978: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:21.981: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:21.984: INFO: Unable to read jessie_udp@dns-test-service.dns-6304 from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:21.987: INFO: Unable to read jessie_tcp@dns-test-service.dns-6304 from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:21.991: INFO: Unable to read jessie_udp@dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:21.994: INFO: Unable to read jessie_tcp@dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:21.997: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:22.000: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:22.018: INFO: Lookups using dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6304 wheezy_tcp@dns-test-service.dns-6304 wheezy_udp@dns-test-service.dns-6304.svc wheezy_tcp@dns-test-service.dns-6304.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-6304.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6304.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6304 jessie_tcp@dns-test-service.dns-6304 jessie_udp@dns-test-service.dns-6304.svc jessie_tcp@dns-test-service.dns-6304.svc jessie_udp@_http._tcp.dns-test-service.dns-6304.svc jessie_tcp@_http._tcp.dns-test-service.dns-6304.svc] May 24 01:01:26.921: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:26.928: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:26.932: INFO: Unable to read wheezy_udp@dns-test-service.dns-6304 from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:26.935: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6304 from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:26.938: INFO: Unable to read wheezy_udp@dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:26.940: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:26.942: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:26.944: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:26.959: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:26.961: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:26.964: INFO: Unable to read jessie_udp@dns-test-service.dns-6304 from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:26.967: INFO: Unable to read jessie_tcp@dns-test-service.dns-6304 from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:26.969: INFO: Unable to read jessie_udp@dns-test-service.dns-6304.svc from pod 
dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:26.972: INFO: Unable to read jessie_tcp@dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:26.975: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:26.977: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:26.995: INFO: Lookups using dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6304 wheezy_tcp@dns-test-service.dns-6304 wheezy_udp@dns-test-service.dns-6304.svc wheezy_tcp@dns-test-service.dns-6304.svc wheezy_udp@_http._tcp.dns-test-service.dns-6304.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6304.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6304 jessie_tcp@dns-test-service.dns-6304 jessie_udp@dns-test-service.dns-6304.svc jessie_tcp@dns-test-service.dns-6304.svc jessie_udp@_http._tcp.dns-test-service.dns-6304.svc jessie_tcp@_http._tcp.dns-test-service.dns-6304.svc] May 24 01:01:31.920: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:31.923: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:31.925: INFO: Unable to read wheezy_udp@dns-test-service.dns-6304 from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:31.928: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6304 from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:31.931: INFO: Unable to read wheezy_udp@dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:31.933: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:31.935: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:31.937: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6304.svc from pod 
dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:31.954: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:31.956: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:31.959: INFO: Unable to read jessie_udp@dns-test-service.dns-6304 from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:31.961: INFO: Unable to read jessie_tcp@dns-test-service.dns-6304 from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:31.963: INFO: Unable to read jessie_udp@dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:31.965: INFO: Unable to read jessie_tcp@dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:31.967: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:31.970: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:31.986: INFO: Lookups using dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6304 wheezy_tcp@dns-test-service.dns-6304 wheezy_udp@dns-test-service.dns-6304.svc wheezy_tcp@dns-test-service.dns-6304.svc wheezy_udp@_http._tcp.dns-test-service.dns-6304.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6304.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6304 jessie_tcp@dns-test-service.dns-6304 jessie_udp@dns-test-service.dns-6304.svc jessie_tcp@dns-test-service.dns-6304.svc jessie_udp@_http._tcp.dns-test-service.dns-6304.svc jessie_tcp@_http._tcp.dns-test-service.dns-6304.svc] May 24 01:01:36.922: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:36.925: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:36.928: INFO: Unable to read wheezy_udp@dns-test-service.dns-6304 from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the 
server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:36.930: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6304 from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:36.932: INFO: Unable to read wheezy_udp@dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:36.935: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:36.942: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:36.945: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:36.960: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:36.963: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:36.965: INFO: Unable to read jessie_udp@dns-test-service.dns-6304 from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:36.970: INFO: Unable to read jessie_tcp@dns-test-service.dns-6304 from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:36.973: INFO: Unable to read jessie_udp@dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:36.975: INFO: Unable to read jessie_tcp@dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:36.977: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:36.980: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6304.svc from pod dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8: the server could not find the requested resource (get pods dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8) May 24 01:01:36.992: INFO: Lookups using dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8 failed for: [wheezy_udp@dns-test-service 
wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6304 wheezy_tcp@dns-test-service.dns-6304 wheezy_udp@dns-test-service.dns-6304.svc wheezy_tcp@dns-test-service.dns-6304.svc wheezy_udp@_http._tcp.dns-test-service.dns-6304.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6304.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6304 jessie_tcp@dns-test-service.dns-6304 jessie_udp@dns-test-service.dns-6304.svc jessie_tcp@dns-test-service.dns-6304.svc jessie_udp@_http._tcp.dns-test-service.dns-6304.svc jessie_tcp@_http._tcp.dns-test-service.dns-6304.svc] May 24 01:01:42.008: INFO: DNS probes using dns-6304/dns-test-92e03e81-9890-4174-958e-5b0285f2a6b8 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 01:01:42.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6304" for this suite. • [SLOW TEST:37.444 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":288,"completed":259,"skipped":4284,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 01:01:42.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod with failed condition STEP: updating the pod May 24 01:03:43.526: INFO: Successfully updated pod "var-expansion-3c2f22af-167a-4803-ace9-93b84ea1e0aa" STEP: waiting for pod running STEP: deleting the pod gracefully May 24 01:03:45.557: INFO: Deleting pod "var-expansion-3c2f22af-167a-4803-ace9-93b84ea1e0aa" in namespace "var-expansion-8116" May 24 01:03:45.562: INFO: Wait up to 5m0s for pod "var-expansion-3c2f22af-167a-4803-ace9-93b84ea1e0aa" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 01:04:25.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8116" for this suite. 
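[Note on the dig probe loops quoted at the start of the DNS test above ("Running these commands on jessie: …"): the doubled dollar signs ($$) are the Kubernetes escape for a literal $ in container command strings — the kubelet expands $(VAR) references in command/args, and $$ escapes that — so the shell inside the probe pod sees single $. Unescaped and reflowed, one probe pair boils down to the following sketch; the service name, record type, and /results marker paths are taken verbatim from the log:

# Poll up to 600 times, once per second; on the first non-empty
# answer, drop an OK marker file that the test later reads back.
for i in $(seq 1 600); do
  # UDP lookup (dig defaults to UDP; +notcp makes it explicit)
  check="$(dig +notcp +noall +answer +search dns-test-service A)" &&
    test -n "$check" && echo OK > /results/jessie_udp@dns-test-service
  # Same query over TCP
  check="$(dig +tcp +noall +answer +search dns-test-service A)" &&
    test -n "$check" && echo OK > /results/jessie_tcp@dns-test-service
  sleep 1
done

The long runs of "Unable to read … the server could not find the requested resource" above are the test polling for those marker files before the probe pod has written them; the run recovers and the probes succeed at 01:01:42.]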
• [SLOW TEST:162.733 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":288,"completed":260,"skipped":4297,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 01:04:25.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-aa112880-231c-4a30-88a2-c61ef74bfe6c STEP: Creating a pod to test consume configMaps May 24 01:04:25.725: INFO: Waiting up to 5m0s for pod "pod-configmaps-4121d8fb-8d62-47e5-b9e7-7c010d505de2" in namespace "configmap-7074" to be "Succeeded or Failed" May 24 01:04:25.742: INFO: Pod "pod-configmaps-4121d8fb-8d62-47e5-b9e7-7c010d505de2": Phase="Pending", Reason="", readiness=false. Elapsed: 16.715538ms May 24 01:04:27.822: INFO: Pod "pod-configmaps-4121d8fb-8d62-47e5-b9e7-7c010d505de2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097149487s May 24 01:04:29.826: INFO: Pod "pod-configmaps-4121d8fb-8d62-47e5-b9e7-7c010d505de2": Phase="Running", Reason="", readiness=true. Elapsed: 4.101308818s May 24 01:04:31.831: INFO: Pod "pod-configmaps-4121d8fb-8d62-47e5-b9e7-7c010d505de2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.106033867s STEP: Saw pod success May 24 01:04:31.831: INFO: Pod "pod-configmaps-4121d8fb-8d62-47e5-b9e7-7c010d505de2" satisfied condition "Succeeded or Failed" May 24 01:04:31.834: INFO: Trying to get logs from node latest-worker pod pod-configmaps-4121d8fb-8d62-47e5-b9e7-7c010d505de2 container configmap-volume-test: STEP: delete the pod May 24 01:04:31.868: INFO: Waiting for pod pod-configmaps-4121d8fb-8d62-47e5-b9e7-7c010d505de2 to disappear May 24 01:04:31.903: INFO: Pod pod-configmaps-4121d8fb-8d62-47e5-b9e7-7c010d505de2 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 01:04:31.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7074" for this suite. 
• [SLOW TEST:6.299 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":288,"completed":261,"skipped":4325,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 01:04:31.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-5286 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating statefulset ss in namespace statefulset-5286 May 24 01:04:32.062: INFO: Found 0 stateful pods, waiting for 1 May 24 01:04:42.067: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 May 24 01:04:42.082: INFO: Deleting all statefulset in ns statefulset-5286 May 24 01:04:42.084: INFO: Scaling statefulset ss to 0 May 24 01:05:12.196: INFO: Waiting for statefulset status.replicas updated to 0 May 24 01:05:12.199: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 01:05:12.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5286" for this suite. 
• [SLOW TEST:40.307 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":288,"completed":262,"skipped":4343,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 01:05:12.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-7139 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-7139 STEP: creating replication controller externalsvc in namespace services-7139 I0524 01:05:12.504864 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-7139, replica count: 2 I0524 01:05:15.556541 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0524 01:05:18.556804 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName May 24 01:05:18.602: INFO: Creating new exec pod May 24 01:05:22.686: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-7139 execpod7mp2q -- /bin/sh -x -c nslookup clusterip-service' May 24 01:05:25.577: INFO: stderr: "I0524 01:05:25.461526 3546 log.go:172] (0xc000d56bb0) (0xc000859f40) Create stream\nI0524 01:05:25.461653 3546 log.go:172] (0xc000d56bb0) (0xc000859f40) Stream added, broadcasting: 1\nI0524 01:05:25.464558 3546 log.go:172] (0xc000d56bb0) Reply frame received for 1\nI0524 01:05:25.464589 3546 log.go:172] (0xc000d56bb0) (0xc000848d20) Create stream\nI0524 01:05:25.464596 3546 log.go:172] (0xc000d56bb0) (0xc000848d20) Stream added, broadcasting: 3\nI0524 01:05:25.465773 3546 log.go:172] (0xc000d56bb0) Reply frame received for 3\nI0524 01:05:25.465801 3546 log.go:172] (0xc000d56bb0) (0xc00084f540) Create stream\nI0524 01:05:25.465808 3546 log.go:172] (0xc000d56bb0) (0xc00084f540) Stream added, broadcasting: 5\nI0524 01:05:25.466591 3546 log.go:172] 
(0xc000d56bb0) Reply frame received for 5\nI0524 01:05:25.542851 3546 log.go:172] (0xc000d56bb0) Data frame received for 5\nI0524 01:05:25.542886 3546 log.go:172] (0xc00084f540) (5) Data frame handling\nI0524 01:05:25.542906 3546 log.go:172] (0xc00084f540) (5) Data frame sent\n+ nslookup clusterip-service\nI0524 01:05:25.566006 3546 log.go:172] (0xc000d56bb0) Data frame received for 3\nI0524 01:05:25.566033 3546 log.go:172] (0xc000848d20) (3) Data frame handling\nI0524 01:05:25.566051 3546 log.go:172] (0xc000848d20) (3) Data frame sent\nI0524 01:05:25.566768 3546 log.go:172] (0xc000d56bb0) Data frame received for 3\nI0524 01:05:25.566784 3546 log.go:172] (0xc000848d20) (3) Data frame handling\nI0524 01:05:25.566805 3546 log.go:172] (0xc000848d20) (3) Data frame sent\nI0524 01:05:25.567704 3546 log.go:172] (0xc000d56bb0) Data frame received for 3\nI0524 01:05:25.567729 3546 log.go:172] (0xc000d56bb0) Data frame received for 5\nI0524 01:05:25.567763 3546 log.go:172] (0xc00084f540) (5) Data frame handling\nI0524 01:05:25.567792 3546 log.go:172] (0xc000848d20) (3) Data frame handling\nI0524 01:05:25.569501 3546 log.go:172] (0xc000d56bb0) Data frame received for 1\nI0524 01:05:25.569527 3546 log.go:172] (0xc000859f40) (1) Data frame handling\nI0524 01:05:25.569541 3546 log.go:172] (0xc000859f40) (1) Data frame sent\nI0524 01:05:25.569780 3546 log.go:172] (0xc000d56bb0) (0xc000859f40) Stream removed, broadcasting: 1\nI0524 01:05:25.569801 3546 log.go:172] (0xc000d56bb0) Go away received\nI0524 01:05:25.570227 3546 log.go:172] (0xc000d56bb0) (0xc000859f40) Stream removed, broadcasting: 1\nI0524 01:05:25.570253 3546 log.go:172] (0xc000d56bb0) (0xc000848d20) Stream removed, broadcasting: 3\nI0524 01:05:25.570268 3546 log.go:172] (0xc000d56bb0) (0xc00084f540) Stream removed, broadcasting: 5\n" May 24 01:05:25.577: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-7139.svc.cluster.local\tcanonical name = externalsvc.services-7139.svc.cluster.local.\nName:\texternalsvc.services-7139.svc.cluster.local\nAddress: 10.107.45.161\n\n" STEP: deleting ReplicationController externalsvc in namespace services-7139, will wait for the garbage collector to delete the pods May 24 01:05:25.653: INFO: Deleting ReplicationController externalsvc took: 8.01726ms May 24 01:05:25.753: INFO: Terminating ReplicationController externalsvc pods took: 100.185608ms May 24 01:05:35.321: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 01:05:35.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7139" for this suite. 
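[Note on the Services test above: the ClusterIP -> ExternalName flip can be reproduced with a patch along these lines. A sketch only — spec.clusterIP has to be cleared when converting, since ExternalName services carry no cluster IP, and exact validation behaviour varies across API server versions:

kubectl -n services-7139 patch service clusterip-service --type merge -p \
  '{"spec":{"type":"ExternalName","externalName":"externalsvc.services-7139.svc.cluster.local","clusterIP":""}}'

The nslookup output captured above then shows the expected CNAME: clusterip-service.services-7139.svc.cluster.local resolving to externalsvc.services-7139.svc.cluster.local.]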
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:23.228 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":288,"completed":263,"skipped":4359,"failed":0} [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 01:05:35.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-3543 May 24 01:05:39.549: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3543 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 24 01:05:39.778: INFO: stderr: "I0524 01:05:39.692275 3579 log.go:172] (0xc000b571e0) (0xc000abc6e0) Create stream\nI0524 01:05:39.692335 3579 log.go:172] (0xc000b571e0) (0xc000abc6e0) Stream added, broadcasting: 1\nI0524 01:05:39.697381 3579 log.go:172] (0xc000b571e0) Reply frame received for 1\nI0524 01:05:39.697429 3579 log.go:172] (0xc000b571e0) (0xc00085ae60) Create stream\nI0524 01:05:39.697448 3579 log.go:172] (0xc000b571e0) (0xc00085ae60) Stream added, broadcasting: 3\nI0524 01:05:39.698477 3579 log.go:172] (0xc000b571e0) Reply frame received for 3\nI0524 01:05:39.698521 3579 log.go:172] (0xc000b571e0) (0xc00050ad20) Create stream\nI0524 01:05:39.698536 3579 log.go:172] (0xc000b571e0) (0xc00050ad20) Stream added, broadcasting: 5\nI0524 01:05:39.700173 3579 log.go:172] (0xc000b571e0) Reply frame received for 5\nI0524 01:05:39.765783 3579 log.go:172] (0xc000b571e0) Data frame received for 5\nI0524 01:05:39.765828 3579 log.go:172] (0xc00050ad20) (5) Data frame handling\nI0524 01:05:39.765848 3579 log.go:172] (0xc00050ad20) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0524 01:05:39.770474 3579 log.go:172] (0xc000b571e0) Data frame received for 3\nI0524 01:05:39.770502 3579 log.go:172] (0xc00085ae60) (3) Data frame handling\nI0524 01:05:39.770525 3579 log.go:172] (0xc00085ae60) (3) Data frame sent\nI0524 01:05:39.770846 3579 log.go:172] (0xc000b571e0) Data frame received for 5\nI0524 01:05:39.770872 3579 log.go:172] (0xc00050ad20) (5) Data frame handling\nI0524 01:05:39.770888 3579 log.go:172] (0xc000b571e0) Data frame received for 3\nI0524 01:05:39.770893 3579 log.go:172] (0xc00085ae60) (3) Data frame 
handling\nI0524 01:05:39.772484 3579 log.go:172] (0xc000b571e0) Data frame received for 1\nI0524 01:05:39.772508 3579 log.go:172] (0xc000abc6e0) (1) Data frame handling\nI0524 01:05:39.772526 3579 log.go:172] (0xc000abc6e0) (1) Data frame sent\nI0524 01:05:39.772542 3579 log.go:172] (0xc000b571e0) (0xc000abc6e0) Stream removed, broadcasting: 1\nI0524 01:05:39.772577 3579 log.go:172] (0xc000b571e0) Go away received\nI0524 01:05:39.772992 3579 log.go:172] (0xc000b571e0) (0xc000abc6e0) Stream removed, broadcasting: 1\nI0524 01:05:39.773021 3579 log.go:172] (0xc000b571e0) (0xc00085ae60) Stream removed, broadcasting: 3\nI0524 01:05:39.773036 3579 log.go:172] (0xc000b571e0) (0xc00050ad20) Stream removed, broadcasting: 5\n" May 24 01:05:39.778: INFO: stdout: "iptables" May 24 01:05:39.778: INFO: proxyMode: iptables May 24 01:05:39.795: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 24 01:05:39.833: INFO: Pod kube-proxy-mode-detector still exists May 24 01:05:41.834: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 24 01:05:41.877: INFO: Pod kube-proxy-mode-detector still exists May 24 01:05:43.834: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 24 01:05:43.838: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-3543 STEP: creating replication controller affinity-nodeport-timeout in namespace services-3543 I0524 01:05:43.913476 7 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-3543, replica count: 3 I0524 01:05:46.963919 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0524 01:05:49.964196 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 24 01:05:49.975: INFO: Creating new exec pod May 24 01:05:55.002: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3543 execpod-affinity764t8 -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' May 24 01:05:55.233: INFO: stderr: "I0524 01:05:55.130062 3601 log.go:172] (0xc000ab5760) (0xc000840f00) Create stream\nI0524 01:05:55.130137 3601 log.go:172] (0xc000ab5760) (0xc000840f00) Stream added, broadcasting: 1\nI0524 01:05:55.132311 3601 log.go:172] (0xc000ab5760) Reply frame received for 1\nI0524 01:05:55.132355 3601 log.go:172] (0xc000ab5760) (0xc000afc140) Create stream\nI0524 01:05:55.132367 3601 log.go:172] (0xc000ab5760) (0xc000afc140) Stream added, broadcasting: 3\nI0524 01:05:55.133422 3601 log.go:172] (0xc000ab5760) Reply frame received for 3\nI0524 01:05:55.133464 3601 log.go:172] (0xc000ab5760) (0xc0004897c0) Create stream\nI0524 01:05:55.133476 3601 log.go:172] (0xc000ab5760) (0xc0004897c0) Stream added, broadcasting: 5\nI0524 01:05:55.134313 3601 log.go:172] (0xc000ab5760) Reply frame received for 5\nI0524 01:05:55.209426 3601 log.go:172] (0xc000ab5760) Data frame received for 5\nI0524 01:05:55.209466 3601 log.go:172] (0xc0004897c0) (5) Data frame handling\nI0524 01:05:55.209509 3601 log.go:172] (0xc0004897c0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nI0524 01:05:55.226000 3601 log.go:172] (0xc000ab5760) Data frame received for 5\nI0524 01:05:55.226023 3601 log.go:172] (0xc0004897c0) (5) Data frame handling\nI0524 01:05:55.226036 3601 
log.go:172] (0xc0004897c0) (5) Data frame sent\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI0524 01:05:55.226330 3601 log.go:172] (0xc000ab5760) Data frame received for 3\nI0524 01:05:55.226354 3601 log.go:172] (0xc000afc140) (3) Data frame handling\nI0524 01:05:55.226816 3601 log.go:172] (0xc000ab5760) Data frame received for 5\nI0524 01:05:55.226839 3601 log.go:172] (0xc0004897c0) (5) Data frame handling\nI0524 01:05:55.228290 3601 log.go:172] (0xc000ab5760) Data frame received for 1\nI0524 01:05:55.228306 3601 log.go:172] (0xc000840f00) (1) Data frame handling\nI0524 01:05:55.228320 3601 log.go:172] (0xc000840f00) (1) Data frame sent\nI0524 01:05:55.228434 3601 log.go:172] (0xc000ab5760) (0xc000840f00) Stream removed, broadcasting: 1\nI0524 01:05:55.228507 3601 log.go:172] (0xc000ab5760) Go away received\nI0524 01:05:55.229051 3601 log.go:172] (0xc000ab5760) (0xc000840f00) Stream removed, broadcasting: 1\nI0524 01:05:55.229077 3601 log.go:172] (0xc000ab5760) (0xc000afc140) Stream removed, broadcasting: 3\nI0524 01:05:55.229087 3601 log.go:172] (0xc000ab5760) (0xc0004897c0) Stream removed, broadcasting: 5\n" May 24 01:05:55.233: INFO: stdout: "" May 24 01:05:55.234: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3543 execpod-affinity764t8 -- /bin/sh -x -c nc -zv -t -w 2 10.105.2.182 80' May 24 01:05:55.459: INFO: stderr: "I0524 01:05:55.376729 3622 log.go:172] (0xc000a27340) (0xc000aae1e0) Create stream\nI0524 01:05:55.376795 3622 log.go:172] (0xc000a27340) (0xc000aae1e0) Stream added, broadcasting: 1\nI0524 01:05:55.381685 3622 log.go:172] (0xc000a27340) Reply frame received for 1\nI0524 01:05:55.381724 3622 log.go:172] (0xc000a27340) (0xc000710e60) Create stream\nI0524 01:05:55.381736 3622 log.go:172] (0xc000a27340) (0xc000710e60) Stream added, broadcasting: 3\nI0524 01:05:55.382468 3622 log.go:172] (0xc000a27340) Reply frame received for 3\nI0524 01:05:55.382496 3622 log.go:172] (0xc000a27340) (0xc000410d20) Create stream\nI0524 01:05:55.382506 3622 log.go:172] (0xc000a27340) (0xc000410d20) Stream added, broadcasting: 5\nI0524 01:05:55.383382 3622 log.go:172] (0xc000a27340) Reply frame received for 5\nI0524 01:05:55.453613 3622 log.go:172] (0xc000a27340) Data frame received for 5\nI0524 01:05:55.453667 3622 log.go:172] (0xc000410d20) (5) Data frame handling\nI0524 01:05:55.453685 3622 log.go:172] (0xc000410d20) (5) Data frame sent\nI0524 01:05:55.453704 3622 log.go:172] (0xc000a27340) Data frame received for 5\nI0524 01:05:55.453721 3622 log.go:172] (0xc000410d20) (5) Data frame handling\n+ nc -zv -t -w 2 10.105.2.182 80\nConnection to 10.105.2.182 80 port [tcp/http] succeeded!\nI0524 01:05:55.453762 3622 log.go:172] (0xc000a27340) Data frame received for 3\nI0524 01:05:55.453818 3622 log.go:172] (0xc000710e60) (3) Data frame handling\nI0524 01:05:55.454693 3622 log.go:172] (0xc000a27340) Data frame received for 1\nI0524 01:05:55.454718 3622 log.go:172] (0xc000aae1e0) (1) Data frame handling\nI0524 01:05:55.454729 3622 log.go:172] (0xc000aae1e0) (1) Data frame sent\nI0524 01:05:55.454742 3622 log.go:172] (0xc000a27340) (0xc000aae1e0) Stream removed, broadcasting: 1\nI0524 01:05:55.454754 3622 log.go:172] (0xc000a27340) Go away received\nI0524 01:05:55.455192 3622 log.go:172] (0xc000a27340) (0xc000aae1e0) Stream removed, broadcasting: 1\nI0524 01:05:55.455210 3622 log.go:172] (0xc000a27340) (0xc000710e60) Stream removed, broadcasting: 3\nI0524 01:05:55.455219 3622 
log.go:172] (0xc000a27340) (0xc000410d20) Stream removed, broadcasting: 5\n" May 24 01:05:55.460: INFO: stdout: "" May 24 01:05:55.460: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3543 execpod-affinity764t8 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31814' May 24 01:05:55.675: INFO: stderr: "I0524 01:05:55.597242 3642 log.go:172] (0xc000a8f080) (0xc000af6500) Create stream\nI0524 01:05:55.597306 3642 log.go:172] (0xc000a8f080) (0xc000af6500) Stream added, broadcasting: 1\nI0524 01:05:55.602799 3642 log.go:172] (0xc000a8f080) Reply frame received for 1\nI0524 01:05:55.602831 3642 log.go:172] (0xc000a8f080) (0xc000842d20) Create stream\nI0524 01:05:55.602839 3642 log.go:172] (0xc000a8f080) (0xc000842d20) Stream added, broadcasting: 3\nI0524 01:05:55.603605 3642 log.go:172] (0xc000a8f080) Reply frame received for 3\nI0524 01:05:55.603632 3642 log.go:172] (0xc000a8f080) (0xc0008285a0) Create stream\nI0524 01:05:55.603641 3642 log.go:172] (0xc000a8f080) (0xc0008285a0) Stream added, broadcasting: 5\nI0524 01:05:55.604640 3642 log.go:172] (0xc000a8f080) Reply frame received for 5\nI0524 01:05:55.669105 3642 log.go:172] (0xc000a8f080) Data frame received for 3\nI0524 01:05:55.669350 3642 log.go:172] (0xc000842d20) (3) Data frame handling\nI0524 01:05:55.669383 3642 log.go:172] (0xc000a8f080) Data frame received for 5\nI0524 01:05:55.669403 3642 log.go:172] (0xc0008285a0) (5) Data frame handling\nI0524 01:05:55.669425 3642 log.go:172] (0xc0008285a0) (5) Data frame sent\nI0524 01:05:55.669446 3642 log.go:172] (0xc000a8f080) Data frame received for 5\nI0524 01:05:55.669456 3642 log.go:172] (0xc0008285a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 31814\nConnection to 172.17.0.13 31814 port [tcp/31814] succeeded!\nI0524 01:05:55.670734 3642 log.go:172] (0xc000a8f080) Data frame received for 1\nI0524 01:05:55.670748 3642 log.go:172] (0xc000af6500) (1) Data frame handling\nI0524 01:05:55.670754 3642 log.go:172] (0xc000af6500) (1) Data frame sent\nI0524 01:05:55.670776 3642 log.go:172] (0xc000a8f080) (0xc000af6500) Stream removed, broadcasting: 1\nI0524 01:05:55.670887 3642 log.go:172] (0xc000a8f080) Go away received\nI0524 01:05:55.671030 3642 log.go:172] (0xc000a8f080) (0xc000af6500) Stream removed, broadcasting: 1\nI0524 01:05:55.671043 3642 log.go:172] (0xc000a8f080) (0xc000842d20) Stream removed, broadcasting: 3\nI0524 01:05:55.671049 3642 log.go:172] (0xc000a8f080) (0xc0008285a0) Stream removed, broadcasting: 5\n" May 24 01:05:55.675: INFO: stdout: "" May 24 01:05:55.675: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3543 execpod-affinity764t8 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31814' May 24 01:05:55.875: INFO: stderr: "I0524 01:05:55.804962 3663 log.go:172] (0xc00003aa50) (0xc00053a140) Create stream\nI0524 01:05:55.805025 3663 log.go:172] (0xc00003aa50) (0xc00053a140) Stream added, broadcasting: 1\nI0524 01:05:55.807768 3663 log.go:172] (0xc00003aa50) Reply frame received for 1\nI0524 01:05:55.807810 3663 log.go:172] (0xc00003aa50) (0xc000446c80) Create stream\nI0524 01:05:55.807823 3663 log.go:172] (0xc00003aa50) (0xc000446c80) Stream added, broadcasting: 3\nI0524 01:05:55.808930 3663 log.go:172] (0xc00003aa50) Reply frame received for 3\nI0524 01:05:55.808976 3663 log.go:172] (0xc00003aa50) (0xc00013a6e0) Create stream\nI0524 01:05:55.808993 3663 log.go:172] (0xc00003aa50) (0xc00013a6e0) Stream added, 
broadcasting: 5\nI0524 01:05:55.810079 3663 log.go:172] (0xc00003aa50) Reply frame received for 5\nI0524 01:05:55.865725 3663 log.go:172] (0xc00003aa50) Data frame received for 5\nI0524 01:05:55.865762 3663 log.go:172] (0xc00013a6e0) (5) Data frame handling\nI0524 01:05:55.865784 3663 log.go:172] (0xc00013a6e0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.12 31814\nConnection to 172.17.0.12 31814 port [tcp/31814] succeeded!\nI0524 01:05:55.865826 3663 log.go:172] (0xc00003aa50) Data frame received for 3\nI0524 01:05:55.865840 3663 log.go:172] (0xc000446c80) (3) Data frame handling\nI0524 01:05:55.867722 3663 log.go:172] (0xc00003aa50) Data frame received for 5\nI0524 01:05:55.867740 3663 log.go:172] (0xc00013a6e0) (5) Data frame handling\nI0524 01:05:55.869578 3663 log.go:172] (0xc00003aa50) Data frame received for 1\nI0524 01:05:55.869596 3663 log.go:172] (0xc00053a140) (1) Data frame handling\nI0524 01:05:55.869616 3663 log.go:172] (0xc00053a140) (1) Data frame sent\nI0524 01:05:55.869631 3663 log.go:172] (0xc00003aa50) (0xc00053a140) Stream removed, broadcasting: 1\nI0524 01:05:55.869886 3663 log.go:172] (0xc00003aa50) Go away received\nI0524 01:05:55.869931 3663 log.go:172] (0xc00003aa50) (0xc00053a140) Stream removed, broadcasting: 1\nI0524 01:05:55.869946 3663 log.go:172] (0xc00003aa50) (0xc000446c80) Stream removed, broadcasting: 3\nI0524 01:05:55.869953 3663 log.go:172] (0xc00003aa50) (0xc00013a6e0) Stream removed, broadcasting: 5\n" May 24 01:05:55.875: INFO: stdout: "" May 24 01:05:55.875: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3543 execpod-affinity764t8 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.17.0.13:31814/ ; done' May 24 01:05:56.186: INFO: stderr: "I0524 01:05:56.020505 3684 log.go:172] (0xc000994bb0) (0xc000a6a320) Create stream\nI0524 01:05:56.020549 3684 log.go:172] (0xc000994bb0) (0xc000a6a320) Stream added, broadcasting: 1\nI0524 01:05:56.024543 3684 log.go:172] (0xc000994bb0) Reply frame received for 1\nI0524 01:05:56.024578 3684 log.go:172] (0xc000994bb0) (0xc0006840a0) Create stream\nI0524 01:05:56.024588 3684 log.go:172] (0xc000994bb0) (0xc0006840a0) Stream added, broadcasting: 3\nI0524 01:05:56.025563 3684 log.go:172] (0xc000994bb0) Reply frame received for 3\nI0524 01:05:56.025614 3684 log.go:172] (0xc000994bb0) (0xc00065a140) Create stream\nI0524 01:05:56.025633 3684 log.go:172] (0xc000994bb0) (0xc00065a140) Stream added, broadcasting: 5\nI0524 01:05:56.026451 3684 log.go:172] (0xc000994bb0) Reply frame received for 5\nI0524 01:05:56.102217 3684 log.go:172] (0xc000994bb0) Data frame received for 5\nI0524 01:05:56.102275 3684 log.go:172] (0xc00065a140) (5) Data frame handling\nI0524 01:05:56.102303 3684 log.go:172] (0xc00065a140) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31814/\nI0524 01:05:56.102371 3684 log.go:172] (0xc000994bb0) Data frame received for 3\nI0524 01:05:56.102417 3684 log.go:172] (0xc0006840a0) (3) Data frame handling\nI0524 01:05:56.102454 3684 log.go:172] (0xc0006840a0) (3) Data frame sent\nI0524 01:05:56.108721 3684 log.go:172] (0xc000994bb0) Data frame received for 3\nI0524 01:05:56.108754 3684 log.go:172] (0xc0006840a0) (3) Data frame handling\nI0524 01:05:56.108781 3684 log.go:172] (0xc0006840a0) (3) Data frame sent\nI0524 01:05:56.110050 3684 log.go:172] (0xc000994bb0) Data frame received for 3\nI0524 01:05:56.110081 3684 log.go:172] (0xc0006840a0) 
(3) Data frame handling\nI0524 01:05:56.110101 3684 log.go:172] (0xc0006840a0) (3) Data frame sent\nI0524 01:05:56.110132 3684 log.go:172] (0xc000994bb0) Data frame received for 5\nI0524 01:05:56.110148 3684 log.go:172] (0xc00065a140) (5) Data frame handling\nI0524 01:05:56.110173 3684 log.go:172] (0xc00065a140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31814/\nI0524 01:05:56.114943 3684 log.go:172] (0xc000994bb0) Data frame received for 3\nI0524 01:05:56.114963 3684 log.go:172] (0xc0006840a0) (3) Data frame handling\nI0524 01:05:56.114974 3684 log.go:172] (0xc0006840a0) (3) Data frame sent\nI0524 01:05:56.115739 3684 log.go:172] (0xc000994bb0) Data frame received for 3\nI0524 01:05:56.115778 3684 log.go:172] (0xc0006840a0) (3) Data frame handling\nI0524 01:05:56.115792 3684 log.go:172] (0xc0006840a0) (3) Data frame sent\nI0524 01:05:56.115810 3684 log.go:172] (0xc000994bb0) Data frame received for 5\nI0524 01:05:56.115827 3684 log.go:172] (0xc00065a140) (5) Data frame handling\nI0524 01:05:56.115839 3684 log.go:172] (0xc00065a140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31814/\nI0524 01:05:56.119763 3684 log.go:172] (0xc000994bb0) Data frame received for 3\nI0524 01:05:56.119784 3684 log.go:172] (0xc0006840a0) (3) Data frame handling\nI0524 01:05:56.119802 3684 log.go:172] (0xc0006840a0) (3) Data frame sent\nI0524 01:05:56.120358 3684 log.go:172] (0xc000994bb0) Data frame received for 3\nI0524 01:05:56.120374 3684 log.go:172] (0xc0006840a0) (3) Data frame handling\nI0524 01:05:56.120383 3684 log.go:172] (0xc0006840a0) (3) Data frame sent\nI0524 01:05:56.120459 3684 log.go:172] (0xc000994bb0) Data frame received for 5\nI0524 01:05:56.120483 3684 log.go:172] (0xc00065a140) (5) Data frame handling\nI0524 01:05:56.120499 3684 log.go:172] (0xc00065a140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31814/\nI0524 01:05:56.124154 3684 log.go:172] (0xc000994bb0) Data frame received for 3\nI0524 01:05:56.124177 3684 log.go:172] (0xc0006840a0) (3) Data frame handling\nI0524 01:05:56.124193 3684 log.go:172] (0xc0006840a0) (3) Data frame sent\nI0524 01:05:56.124481 3684 log.go:172] (0xc000994bb0) Data frame received for 3\nI0524 01:05:56.124505 3684 log.go:172] (0xc000994bb0) Data frame received for 5\nI0524 01:05:56.124531 3684 log.go:172] (0xc00065a140) (5) Data frame handling\nI0524 01:05:56.124545 3684 log.go:172] (0xc00065a140) (5) Data frame sent\nI0524 01:05:56.124559 3684 log.go:172] (0xc0006840a0) (3) Data frame handling\nI0524 01:05:56.124580 3684 log.go:172] (0xc0006840a0) (3) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31814/\nI0524 01:05:56.128482 3684 log.go:172] (0xc000994bb0) Data frame received for 3\nI0524 01:05:56.128509 3684 log.go:172] (0xc0006840a0) (3) Data frame handling\nI0524 01:05:56.128517 3684 log.go:172] (0xc0006840a0) (3) Data frame sent\nI0524 01:05:56.129454 3684 log.go:172] (0xc000994bb0) Data frame received for 3\nI0524 01:05:56.129471 3684 log.go:172] (0xc0006840a0) (3) Data frame handling\nI0524 01:05:56.129482 3684 log.go:172] (0xc0006840a0) (3) Data frame sent\nI0524 01:05:56.129498 3684 log.go:172] (0xc000994bb0) Data frame received for 5\nI0524 01:05:56.129504 3684 log.go:172] (0xc00065a140) (5) Data frame handling\nI0524 01:05:56.129510 3684 log.go:172] (0xc00065a140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31814/\nI0524 01:05:56.132385 3684 log.go:172] (0xc000994bb0) Data 
frame received for 3\nI0524 01:05:56.132397 3684 log.go:172] (0xc0006840a0) (3) Data frame handling\nI0524 01:05:56.132403 3684 log.go:172] (0xc0006840a0) (3) Data frame sent\nI0524 01:05:56.132831 3684 log.go:172] (0xc000994bb0) Data frame received for 5\nI0524 01:05:56.132849 3684 log.go:172] (0xc00065a140) (5) Data frame handling\nI0524 01:05:56.132863 3684 log.go:172] (0xc00065a140) (5) Data frame sent\nI0524 01:05:56.132872 3684 log.go:172] (0xc000994bb0) Data frame received for 5\nI0524 01:05:56.132879 3684 log.go:172] (0xc00065a140) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31814/\nI0524 01:05:56.132897 3684 log.go:172] (0xc00065a140) (5) Data frame sent\nI0524 01:05:56.132964 3684 log.go:172] (0xc000994bb0) Data frame received for 3\nI0524 01:05:56.132978 3684 log.go:172] (0xc0006840a0) (3) Data frame handling\nI0524 01:05:56.132990 3684 log.go:172] (0xc0006840a0) (3) Data frame sent\nI0524 01:05:56.137548 3684 log.go:172] (0xc000994bb0) Data frame received for 3\nI0524 01:05:56.137561 3684 log.go:172] (0xc0006840a0) (3) Data frame handling\nI0524 01:05:56.137570 3684 log.go:172] (0xc0006840a0) (3) Data frame sent\nI0524 01:05:56.137722 3684 log.go:172] (0xc000994bb0) Data frame received for 3\nI0524 01:05:56.137734 3684 log.go:172] (0xc0006840a0) (3) Data frame handling\nI0524 01:05:56.137744 3684 log.go:172] (0xc0006840a0) (3) Data frame sent\nI0524 01:05:56.137778 3684 log.go:172] (0xc000994bb0) Data frame received for 5\nI0524 01:05:56.137805 3684 log.go:172] (0xc00065a140) (5) Data frame handling\nI0524 01:05:56.137837 3684 log.go:172] (0xc00065a140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31814/\nI0524 01:05:56.142810 3684 log.go:172] (0xc000994bb0) Data frame received for 3\nI0524 01:05:56.142838 3684 log.go:172] (0xc0006840a0) (3) Data frame handling\nI0524 01:05:56.142858 3684 log.go:172] (0xc0006840a0) (3) Data frame sent\nI0524 01:05:56.143681 3684 log.go:172] (0xc000994bb0) Data frame received for 5\nI0524 01:05:56.143774 3684 log.go:172] (0xc00065a140) (5) Data frame handling\nI0524 01:05:56.143881 3684 log.go:172] (0xc00065a140) (5) Data frame sent\nI0524 01:05:56.143938 3684 log.go:172] (0xc000994bb0) Data frame received for 5\nI0524 01:05:56.144004 3684 log.go:172] (0xc00065a140) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31814/\nI0524 01:05:56.144116 3684 log.go:172] (0xc00065a140) (5) Data frame sent\nI0524 01:05:56.144167 3684 log.go:172] (0xc000994bb0) Data frame received for 3\nI0524 01:05:56.144205 3684 log.go:172] (0xc0006840a0) (3) Data frame handling\nI0524 01:05:56.144255 3684 log.go:172] (0xc0006840a0) (3) Data frame sent\nI0524 01:05:56.151430 3684 log.go:172] (0xc000994bb0) Data frame received for 3\nI0524 01:05:56.151466 3684 log.go:172] (0xc0006840a0) (3) Data frame handling\nI0524 01:05:56.151491 3684 log.go:172] (0xc0006840a0) (3) Data frame sent\nI0524 01:05:56.151745 3684 log.go:172] (0xc000994bb0) Data frame received for 5\nI0524 01:05:56.151782 3684 log.go:172] (0xc00065a140) (5) Data frame handling\nI0524 01:05:56.151800 3684 log.go:172] (0xc00065a140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31814/\nI0524 01:05:56.151819 3684 log.go:172] (0xc000994bb0) Data frame received for 3\nI0524 01:05:56.151827 3684 log.go:172] (0xc0006840a0) (3) Data frame handling\nI0524 01:05:56.151833 3684 log.go:172] (0xc0006840a0) (3) Data frame sent\nI0524 01:05:56.155742 3684 log.go:172] 
(0xc000994bb0) Data frame received for 3\nI0524 01:05:56.155764 3684 log.go:172] (0xc0006840a0) (3) Data frame handling\nI0524 01:05:56.155782 3684 log.go:172] (0xc0006840a0) (3) Data frame sent\nI0524 01:05:56.156211 3684 log.go:172] (0xc000994bb0) Data frame received for 3\nI0524 01:05:56.156237 3684 log.go:172] (0xc0006840a0) (3) Data frame handling\nI0524 01:05:56.156244 3684 log.go:172] (0xc0006840a0) (3) Data frame sent\nI0524 01:05:56.156259 3684 log.go:172] (0xc000994bb0) Data frame received for 5\nI0524 01:05:56.156277 3684 log.go:172] (0xc00065a140) (5) Data frame handling\nI0524 01:05:56.156292 3684 log.go:172] (0xc00065a140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31814/\nI0524 01:05:56.160999 3684 log.go:172] (0xc000994bb0) Data frame received for 3\nI0524 01:05:56.161014 3684 log.go:172] (0xc0006840a0) (3) Data frame handling\nI0524 01:05:56.161025 3684 log.go:172] (0xc0006840a0) (3) Data frame sent\nI0524 01:05:56.161745 3684 log.go:172] (0xc000994bb0) Data frame received for 5\nI0524 01:05:56.161766 3684 log.go:172] (0xc00065a140) (5) Data frame handling\nI0524 01:05:56.161773 3684 log.go:172] (0xc00065a140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31814/\nI0524 01:05:56.161794 3684 log.go:172] (0xc000994bb0) Data frame received for 3\nI0524 01:05:56.161800 3684 log.go:172] (0xc0006840a0) (3) Data frame handling\nI0524 01:05:56.161806 3684 log.go:172] (0xc0006840a0) (3) Data frame sent\nI0524 01:05:56.165436 3684 log.go:172] (0xc000994bb0) Data frame received for 3\nI0524 01:05:56.165524 3684 log.go:172] (0xc0006840a0) (3) Data frame handling\nI0524 01:05:56.165555 3684 log.go:172] (0xc0006840a0) (3) Data frame sent\nI0524 01:05:56.165766 3684 log.go:172] (0xc000994bb0) Data frame received for 3\nI0524 01:05:56.165781 3684 log.go:172] (0xc0006840a0) (3) Data frame handling\nI0524 01:05:56.165788 3684 log.go:172] (0xc0006840a0) (3) Data frame sent\nI0524 01:05:56.165797 3684 log.go:172] (0xc000994bb0) Data frame received for 5\nI0524 01:05:56.165807 3684 log.go:172] (0xc00065a140) (5) Data frame handling\nI0524 01:05:56.165812 3684 log.go:172] (0xc00065a140) (5) Data frame sent\nI0524 01:05:56.165820 3684 log.go:172] (0xc000994bb0) Data frame received for 5\nI0524 01:05:56.165828 3684 log.go:172] (0xc00065a140) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31814/\nI0524 01:05:56.165843 3684 log.go:172] (0xc00065a140) (5) Data frame sent\nI0524 01:05:56.169009 3684 log.go:172] (0xc000994bb0) Data frame received for 3\nI0524 01:05:56.169023 3684 log.go:172] (0xc0006840a0) (3) Data frame handling\nI0524 01:05:56.169034 3684 log.go:172] (0xc0006840a0) (3) Data frame sent\nI0524 01:05:56.169779 3684 log.go:172] (0xc000994bb0) Data frame received for 5\nI0524 01:05:56.169798 3684 log.go:172] (0xc00065a140) (5) Data frame handling\nI0524 01:05:56.169824 3684 log.go:172] (0xc00065a140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31814/\nI0524 01:05:56.169837 3684 log.go:172] (0xc000994bb0) Data frame received for 3\nI0524 01:05:56.169862 3684 log.go:172] (0xc0006840a0) (3) Data frame handling\nI0524 01:05:56.169871 3684 log.go:172] (0xc0006840a0) (3) Data frame sent\nI0524 01:05:56.172777 3684 log.go:172] (0xc000994bb0) Data frame received for 3\nI0524 01:05:56.172794 3684 log.go:172] (0xc0006840a0) (3) Data frame handling\nI0524 01:05:56.172808 3684 log.go:172] (0xc0006840a0) (3) Data frame sent\nI0524 01:05:56.173233 3684 
log.go:172] (0xc000994bb0) Data frame received for 5\nI0524 01:05:56.173254 3684 log.go:172] (0xc00065a140) (5) Data frame handling\nI0524 01:05:56.173261 3684 log.go:172] (0xc00065a140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31814/\nI0524 01:05:56.173271 3684 log.go:172] (0xc000994bb0) Data frame received for 3\nI0524 01:05:56.173276 3684 log.go:172] (0xc0006840a0) (3) Data frame handling\nI0524 01:05:56.173281 3684 log.go:172] (0xc0006840a0) (3) Data frame sent\nI0524 01:05:56.176266 3684 log.go:172] (0xc000994bb0) Data frame received for 3\nI0524 01:05:56.176282 3684 log.go:172] (0xc0006840a0) (3) Data frame handling\nI0524 01:05:56.176293 3684 log.go:172] (0xc0006840a0) (3) Data frame sent\nI0524 01:05:56.176623 3684 log.go:172] (0xc000994bb0) Data frame received for 3\nI0524 01:05:56.176642 3684 log.go:172] (0xc0006840a0) (3) Data frame handling\nI0524 01:05:56.176655 3684 log.go:172] (0xc0006840a0) (3) Data frame sent\nI0524 01:05:56.176671 3684 log.go:172] (0xc000994bb0) Data frame received for 5\nI0524 01:05:56.176683 3684 log.go:172] (0xc00065a140) (5) Data frame handling\nI0524 01:05:56.176692 3684 log.go:172] (0xc00065a140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31814/\nI0524 01:05:56.179718 3684 log.go:172] (0xc000994bb0) Data frame received for 3\nI0524 01:05:56.179744 3684 log.go:172] (0xc0006840a0) (3) Data frame handling\nI0524 01:05:56.179754 3684 log.go:172] (0xc0006840a0) (3) Data frame sent\nI0524 01:05:56.180355 3684 log.go:172] (0xc000994bb0) Data frame received for 5\nI0524 01:05:56.180371 3684 log.go:172] (0xc00065a140) (5) Data frame handling\nI0524 01:05:56.180389 3684 log.go:172] (0xc000994bb0) Data frame received for 3\nI0524 01:05:56.180399 3684 log.go:172] (0xc0006840a0) (3) Data frame handling\nI0524 01:05:56.182168 3684 log.go:172] (0xc000994bb0) Data frame received for 1\nI0524 01:05:56.182206 3684 log.go:172] (0xc000a6a320) (1) Data frame handling\nI0524 01:05:56.182224 3684 log.go:172] (0xc000a6a320) (1) Data frame sent\nI0524 01:05:56.182245 3684 log.go:172] (0xc000994bb0) (0xc000a6a320) Stream removed, broadcasting: 1\nI0524 01:05:56.182319 3684 log.go:172] (0xc000994bb0) Go away received\nI0524 01:05:56.182560 3684 log.go:172] (0xc000994bb0) (0xc000a6a320) Stream removed, broadcasting: 1\nI0524 01:05:56.182581 3684 log.go:172] (0xc000994bb0) (0xc0006840a0) Stream removed, broadcasting: 3\nI0524 01:05:56.182596 3684 log.go:172] (0xc000994bb0) (0xc00065a140) Stream removed, broadcasting: 5\n" May 24 01:05:56.186: INFO: stdout: "\naffinity-nodeport-timeout-22jwf\naffinity-nodeport-timeout-22jwf\naffinity-nodeport-timeout-22jwf\naffinity-nodeport-timeout-22jwf\naffinity-nodeport-timeout-22jwf\naffinity-nodeport-timeout-22jwf\naffinity-nodeport-timeout-22jwf\naffinity-nodeport-timeout-22jwf\naffinity-nodeport-timeout-22jwf\naffinity-nodeport-timeout-22jwf\naffinity-nodeport-timeout-22jwf\naffinity-nodeport-timeout-22jwf\naffinity-nodeport-timeout-22jwf\naffinity-nodeport-timeout-22jwf\naffinity-nodeport-timeout-22jwf\naffinity-nodeport-timeout-22jwf" May 24 01:05:56.187: INFO: Received response from host: May 24 01:05:56.187: INFO: Received response from host: affinity-nodeport-timeout-22jwf May 24 01:05:56.187: INFO: Received response from host: affinity-nodeport-timeout-22jwf May 24 01:05:56.187: INFO: Received response from host: affinity-nodeport-timeout-22jwf May 24 01:05:56.187: INFO: Received response from host: affinity-nodeport-timeout-22jwf May 24 01:05:56.187: 
INFO: Received response from host: affinity-nodeport-timeout-22jwf May 24 01:05:56.187: INFO: Received response from host: affinity-nodeport-timeout-22jwf May 24 01:05:56.187: INFO: Received response from host: affinity-nodeport-timeout-22jwf May 24 01:05:56.187: INFO: Received response from host: affinity-nodeport-timeout-22jwf May 24 01:05:56.187: INFO: Received response from host: affinity-nodeport-timeout-22jwf May 24 01:05:56.187: INFO: Received response from host: affinity-nodeport-timeout-22jwf May 24 01:05:56.187: INFO: Received response from host: affinity-nodeport-timeout-22jwf May 24 01:05:56.187: INFO: Received response from host: affinity-nodeport-timeout-22jwf May 24 01:05:56.187: INFO: Received response from host: affinity-nodeport-timeout-22jwf May 24 01:05:56.187: INFO: Received response from host: affinity-nodeport-timeout-22jwf May 24 01:05:56.187: INFO: Received response from host: affinity-nodeport-timeout-22jwf May 24 01:05:56.187: INFO: Received response from host: affinity-nodeport-timeout-22jwf May 24 01:05:56.187: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3543 execpod-affinity764t8 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:31814/' May 24 01:05:56.426: INFO: stderr: "I0524 01:05:56.340988 3704 log.go:172] (0xc000ae96b0) (0xc000b2e6e0) Create stream\nI0524 01:05:56.341050 3704 log.go:172] (0xc000ae96b0) (0xc000b2e6e0) Stream added, broadcasting: 1\nI0524 01:05:56.344119 3704 log.go:172] (0xc000ae96b0) Reply frame received for 1\nI0524 01:05:56.344162 3704 log.go:172] (0xc000ae96b0) (0xc000bc41e0) Create stream\nI0524 01:05:56.344178 3704 log.go:172] (0xc000ae96b0) (0xc000bc41e0) Stream added, broadcasting: 3\nI0524 01:05:56.345418 3704 log.go:172] (0xc000ae96b0) Reply frame received for 3\nI0524 01:05:56.345476 3704 log.go:172] (0xc000ae96b0) (0xc000bc4280) Create stream\nI0524 01:05:56.345497 3704 log.go:172] (0xc000ae96b0) (0xc000bc4280) Stream added, broadcasting: 5\nI0524 01:05:56.346692 3704 log.go:172] (0xc000ae96b0) Reply frame received for 5\nI0524 01:05:56.415409 3704 log.go:172] (0xc000ae96b0) Data frame received for 5\nI0524 01:05:56.415434 3704 log.go:172] (0xc000bc4280) (5) Data frame handling\nI0524 01:05:56.415459 3704 log.go:172] (0xc000bc4280) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31814/\nI0524 01:05:56.419072 3704 log.go:172] (0xc000ae96b0) Data frame received for 3\nI0524 01:05:56.419091 3704 log.go:172] (0xc000bc41e0) (3) Data frame handling\nI0524 01:05:56.419114 3704 log.go:172] (0xc000bc41e0) (3) Data frame sent\nI0524 01:05:56.419696 3704 log.go:172] (0xc000ae96b0) Data frame received for 5\nI0524 01:05:56.419729 3704 log.go:172] (0xc000bc4280) (5) Data frame handling\nI0524 01:05:56.420072 3704 log.go:172] (0xc000ae96b0) Data frame received for 3\nI0524 01:05:56.420084 3704 log.go:172] (0xc000bc41e0) (3) Data frame handling\nI0524 01:05:56.421565 3704 log.go:172] (0xc000ae96b0) Data frame received for 1\nI0524 01:05:56.421587 3704 log.go:172] (0xc000b2e6e0) (1) Data frame handling\nI0524 01:05:56.421598 3704 log.go:172] (0xc000b2e6e0) (1) Data frame sent\nI0524 01:05:56.421615 3704 log.go:172] (0xc000ae96b0) (0xc000b2e6e0) Stream removed, broadcasting: 1\nI0524 01:05:56.421640 3704 log.go:172] (0xc000ae96b0) Go away received\nI0524 01:05:56.421947 3704 log.go:172] (0xc000ae96b0) (0xc000b2e6e0) Stream removed, broadcasting: 1\nI0524 01:05:56.421965 3704 log.go:172] (0xc000ae96b0) (0xc000bc41e0) 
Stream removed, broadcasting: 3\nI0524 01:05:56.421972 3704 log.go:172] (0xc000ae96b0) (0xc000bc4280) Stream removed, broadcasting: 5\n" May 24 01:05:56.426: INFO: stdout: "affinity-nodeport-timeout-22jwf" May 24 01:06:11.426: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config exec --namespace=services-3543 execpod-affinity764t8 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.17.0.13:31814/' May 24 01:06:11.669: INFO: stderr: "I0524 01:06:11.558223 3724 log.go:172] (0xc000a28790) (0xc000139720) Create stream\nI0524 01:06:11.558278 3724 log.go:172] (0xc000a28790) (0xc000139720) Stream added, broadcasting: 1\nI0524 01:06:11.560672 3724 log.go:172] (0xc000a28790) Reply frame received for 1\nI0524 01:06:11.560702 3724 log.go:172] (0xc000a28790) (0xc00056a140) Create stream\nI0524 01:06:11.560711 3724 log.go:172] (0xc000a28790) (0xc00056a140) Stream added, broadcasting: 3\nI0524 01:06:11.561717 3724 log.go:172] (0xc000a28790) Reply frame received for 3\nI0524 01:06:11.561760 3724 log.go:172] (0xc000a28790) (0xc0006e6aa0) Create stream\nI0524 01:06:11.561776 3724 log.go:172] (0xc000a28790) (0xc0006e6aa0) Stream added, broadcasting: 5\nI0524 01:06:11.562705 3724 log.go:172] (0xc000a28790) Reply frame received for 5\nI0524 01:06:11.656499 3724 log.go:172] (0xc000a28790) Data frame received for 5\nI0524 01:06:11.656520 3724 log.go:172] (0xc0006e6aa0) (5) Data frame handling\nI0524 01:06:11.656532 3724 log.go:172] (0xc0006e6aa0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.17.0.13:31814/\nI0524 01:06:11.660714 3724 log.go:172] (0xc000a28790) Data frame received for 3\nI0524 01:06:11.660728 3724 log.go:172] (0xc00056a140) (3) Data frame handling\nI0524 01:06:11.660739 3724 log.go:172] (0xc00056a140) (3) Data frame sent\nI0524 01:06:11.661563 3724 log.go:172] (0xc000a28790) Data frame received for 3\nI0524 01:06:11.661584 3724 log.go:172] (0xc00056a140) (3) Data frame handling\nI0524 01:06:11.661973 3724 log.go:172] (0xc000a28790) Data frame received for 5\nI0524 01:06:11.661990 3724 log.go:172] (0xc0006e6aa0) (5) Data frame handling\nI0524 01:06:11.663367 3724 log.go:172] (0xc000a28790) Data frame received for 1\nI0524 01:06:11.663391 3724 log.go:172] (0xc000139720) (1) Data frame handling\nI0524 01:06:11.663420 3724 log.go:172] (0xc000139720) (1) Data frame sent\nI0524 01:06:11.663443 3724 log.go:172] (0xc000a28790) (0xc000139720) Stream removed, broadcasting: 1\nI0524 01:06:11.663771 3724 log.go:172] (0xc000a28790) Go away received\nI0524 01:06:11.663982 3724 log.go:172] (0xc000a28790) (0xc000139720) Stream removed, broadcasting: 1\nI0524 01:06:11.664015 3724 log.go:172] (0xc000a28790) (0xc00056a140) Stream removed, broadcasting: 3\nI0524 01:06:11.664033 3724 log.go:172] (0xc000a28790) (0xc0006e6aa0) Stream removed, broadcasting: 5\n" May 24 01:06:11.669: INFO: stdout: "affinity-nodeport-timeout-xz8xm" May 24 01:06:11.669: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-3543, will wait for the garbage collector to delete the pods May 24 01:06:11.765: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 22.623016ms May 24 01:06:12.265: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 500.272094ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 01:06:25.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
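The exec sequence above is the heart of the session-affinity check: sixteen curls against the NodePort (172.17.0.13:31814) all land on a single backend (affinity-nodeport-timeout-22jwf); after the test waits out the affinity timeout (01:05:56 to 01:06:11), the same curl reaches a different backend (affinity-nodeport-timeout-xz8xm). A minimal Service of the kind this spec exercises, sketched for reference (selector label, ports, and timeout value are illustrative assumptions; only the NodePort is taken from the log):

    apiVersion: v1
    kind: Service
    metadata:
      name: affinity-nodeport-timeout
    spec:
      type: NodePort
      selector:
        name: affinity-nodeport-timeout   # illustrative; must match the RC's pods
      ports:
      - port: 80
        targetPort: 9376                  # illustrative backend port
        nodePort: 31814                   # the port probed by nc and curl above
      sessionAffinity: ClientIP           # pin each client IP to a single backend
      sessionAffinityConfig:
        clientIP:
          timeoutSeconds: 10              # illustrative; affinity lapses after this idle period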
STEP: Destroying namespace "services-3543" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:49.888 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":288,"completed":264,"skipped":4359,"failed":0} S ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 01:06:25.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in volume subpath May 24 01:06:25.431: INFO: Waiting up to 5m0s for pod "var-expansion-e8fe7b08-f421-4e70-a477-d1c4c1c09b3f" in namespace "var-expansion-6102" to be "Succeeded or Failed" May 24 01:06:25.484: INFO: Pod "var-expansion-e8fe7b08-f421-4e70-a477-d1c4c1c09b3f": Phase="Pending", Reason="", readiness=false. Elapsed: 52.245195ms May 24 01:06:27.547: INFO: Pod "var-expansion-e8fe7b08-f421-4e70-a477-d1c4c1c09b3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11552348s May 24 01:06:29.555: INFO: Pod "var-expansion-e8fe7b08-f421-4e70-a477-d1c4c1c09b3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.123824497s STEP: Saw pod success May 24 01:06:29.555: INFO: Pod "var-expansion-e8fe7b08-f421-4e70-a477-d1c4c1c09b3f" satisfied condition "Succeeded or Failed" May 24 01:06:29.559: INFO: Trying to get logs from node latest-worker2 pod var-expansion-e8fe7b08-f421-4e70-a477-d1c4c1c09b3f container dapi-container: STEP: delete the pod May 24 01:06:29.633: INFO: Waiting for pod var-expansion-e8fe7b08-f421-4e70-a477-d1c4c1c09b3f to disappear May 24 01:06:29.639: INFO: Pod var-expansion-e8fe7b08-f421-4e70-a477-d1c4c1c09b3f no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 01:06:29.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6102" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":288,"completed":265,"skipped":4360,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 01:06:29.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 01:06:45.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4186" for this suite. • [SLOW TEST:16.139 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":288,"completed":266,"skipped":4369,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 01:06:45.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 24 01:06:45.894: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d68af4ad-f187-490f-aca7-7846677cd214" in namespace "projected-9283" to be "Succeeded or Failed" May 24 01:06:45.908: INFO: Pod "downwardapi-volume-d68af4ad-f187-490f-aca7-7846677cd214": Phase="Pending", Reason="", readiness=false. Elapsed: 14.159766ms May 24 01:06:48.089: INFO: Pod "downwardapi-volume-d68af4ad-f187-490f-aca7-7846677cd214": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195337103s May 24 01:06:50.093: INFO: Pod "downwardapi-volume-d68af4ad-f187-490f-aca7-7846677cd214": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.199085875s STEP: Saw pod success May 24 01:06:50.093: INFO: Pod "downwardapi-volume-d68af4ad-f187-490f-aca7-7846677cd214" satisfied condition "Succeeded or Failed" May 24 01:06:50.095: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-d68af4ad-f187-490f-aca7-7846677cd214 container client-container: STEP: delete the pod May 24 01:06:50.162: INFO: Waiting for pod downwardapi-volume-d68af4ad-f187-490f-aca7-7846677cd214 to disappear May 24 01:06:50.166: INFO: Pod downwardapi-volume-d68af4ad-f187-490f-aca7-7846677cd214 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 01:06:50.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9283" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":267,"skipped":4371,"failed":0} ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 01:06:50.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods May 24 01:06:56.798: INFO: Successfully updated pod "adopt-release-ng545" STEP: Checking that the Job readopts the Pod May 24 01:06:56.798: INFO: Waiting up to 15m0s for pod "adopt-release-ng545" in namespace "job-7998" to be "adopted" May 24 01:06:56.826: INFO: Pod "adopt-release-ng545": Phase="Running", Reason="", readiness=true. Elapsed: 28.515987ms May 24 01:06:58.830: INFO: Pod "adopt-release-ng545": Phase="Running", Reason="", readiness=true. Elapsed: 2.032538415s May 24 01:06:58.830: INFO: Pod "adopt-release-ng545" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 24 01:06:59.337: INFO: Successfully updated pod "adopt-release-ng545" STEP: Checking that the Job releases the Pod May 24 01:06:59.337: INFO: Waiting up to 15m0s for pod "adopt-release-ng545" in namespace "job-7998" to be "released" May 24 01:06:59.379: INFO: Pod "adopt-release-ng545": Phase="Running", Reason="", readiness=true. Elapsed: 41.759097ms May 24 01:07:01.428: INFO: Pod "adopt-release-ng545": Phase="Running", Reason="", readiness=true. Elapsed: 2.090979897s May 24 01:07:01.428: INFO: Pod "adopt-release-ng545" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 01:07:01.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-7998" for this suite. 
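Adoption and release in the steps above are driven entirely by labels and owner references: the Job controller re-adopts a running pod once it matches the Job's selector and has no controller owner, and releases it as soon as the matching labels are removed. A Job of roughly the shape this spec creates, sketched here (names, image, and parallelism are illustrative; "adopt-release" mirrors the pod-name prefix in the log):

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: adopt-release
    spec:
      parallelism: 2                 # the spec first waits for active pods == parallelism
      template:
        metadata:
          labels:
            job: adopt-release       # removing this label is what "releases" a pod
        spec:
          restartPolicy: Never
          containers:
          - name: c
            image: busybox
            command: ["sleep", "3600"]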
• [SLOW TEST:11.240 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":288,"completed":268,"skipped":4371,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 01:07:01.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 01:07:05.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5612" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":288,"completed":269,"skipped":4373,"failed":0} SSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 01:07:05.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 24 01:07:05.834: INFO: Creating ReplicaSet my-hostname-basic-0ae801c1-82a5-4082-925a-a11289f980da May 24 01:07:05.844: INFO: Pod name my-hostname-basic-0ae801c1-82a5-4082-925a-a11289f980da: Found 0 pods out of 1 May 24 01:07:10.847: INFO: Pod name my-hostname-basic-0ae801c1-82a5-4082-925a-a11289f980da: Found 1 pods out of 1 May 24 01:07:10.847: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-0ae801c1-82a5-4082-925a-a11289f980da" is running May 24 01:07:10.850: INFO: Pod "my-hostname-basic-0ae801c1-82a5-4082-925a-a11289f980da-qjxm6" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-24 01:07:05 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-24 01:07:08 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True 
LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-24 01:07:08 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-24 01:07:05 +0000 UTC Reason: Message:}]) May 24 01:07:10.850: INFO: Trying to dial the pod May 24 01:07:15.860: INFO: Controller my-hostname-basic-0ae801c1-82a5-4082-925a-a11289f980da: Got expected result from replica 1 [my-hostname-basic-0ae801c1-82a5-4082-925a-a11289f980da-qjxm6]: "my-hostname-basic-0ae801c1-82a5-4082-925a-a11289f980da-qjxm6", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 01:07:15.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-2009" for this suite. • [SLOW TEST:10.145 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":288,"completed":270,"skipped":4377,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 01:07:15.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 May 24 01:07:15.943: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 24 01:07:18.913: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4491 create -f -' May 24 01:07:22.129: INFO: stderr: "" May 24 01:07:22.129: INFO: stdout: "e2e-test-crd-publish-openapi-5276-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 24 01:07:22.129: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4491 delete e2e-test-crd-publish-openapi-5276-crds test-cr' May 24 01:07:22.252: INFO: stderr: "" May 24 01:07:22.252: INFO: stdout: "e2e-test-crd-publish-openapi-5276-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" May 24 01:07:22.252: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4491 apply -f -' May 24 01:07:22.548: INFO: stderr: "" May 24 01:07:22.548: INFO: stdout: 
"e2e-test-crd-publish-openapi-5276-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 24 01:07:22.548: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4491 delete e2e-test-crd-publish-openapi-5276-crds test-cr' May 24 01:07:22.645: INFO: stderr: "" May 24 01:07:22.645: INFO: stdout: "e2e-test-crd-publish-openapi-5276-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 24 01:07:22.645: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32773 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5276-crds' May 24 01:07:22.917: INFO: stderr: "" May 24 01:07:22.917: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5276-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 01:07:24.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4491" for this suite. 
• [SLOW TEST:9.016 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":288,"completed":271,"skipped":4385,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 01:07:24.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-2f8ae88c-cd95-4381-a07e-08ff6af65a11 STEP: Creating a pod to test consume configMaps May 24 01:07:24.994: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1691ba7d-d593-4b55-a1c1-36631c3c113f" in namespace "projected-8074" to be "Succeeded or Failed" May 24 01:07:25.012: INFO: Pod "pod-projected-configmaps-1691ba7d-d593-4b55-a1c1-36631c3c113f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.942323ms May 24 01:07:27.017: INFO: Pod "pod-projected-configmaps-1691ba7d-d593-4b55-a1c1-36631c3c113f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022592594s May 24 01:07:29.022: INFO: Pod "pod-projected-configmaps-1691ba7d-d593-4b55-a1c1-36631c3c113f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027285972s STEP: Saw pod success May 24 01:07:29.022: INFO: Pod "pod-projected-configmaps-1691ba7d-d593-4b55-a1c1-36631c3c113f" satisfied condition "Succeeded or Failed" May 24 01:07:29.025: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-1691ba7d-d593-4b55-a1c1-36631c3c113f container projected-configmap-volume-test: STEP: delete the pod May 24 01:07:29.202: INFO: Waiting for pod pod-projected-configmaps-1691ba7d-d593-4b55-a1c1-36631c3c113f to disappear May 24 01:07:29.344: INFO: Pod pod-projected-configmaps-1691ba7d-d593-4b55-a1c1-36631c3c113f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 01:07:29.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8074" for this suite. 
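"With mappings" means the volume remaps ConfigMap keys onto chosen file paths via items, and "as non-root" means the pod runs under a non-zero UID. A sketch of the pod shape this spec builds (UID, key, path, and image are illustrative; the ConfigMap and container names are taken from the log):

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-projected-configmaps-example        # illustrative
    spec:
      securityContext:
        runAsUser: 1000                              # non-root
      containers:
      - name: projected-configmap-volume-test
        image: busybox
        command: ["cat", "/etc/projected-configmap-volume/path/to/data-2"]
        volumeMounts:
        - name: projected-configmap-volume
          mountPath: /etc/projected-configmap-volume
      volumes:
      - name: projected-configmap-volume
        projected:
          sources:
          - configMap:
              name: projected-configmap-test-volume-map-2f8ae88c-cd95-4381-a07e-08ff6af65a11
              items:
              - key: data-2                          # ConfigMap key
                path: path/to/data-2                 # remapped path inside the mount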
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":288,"completed":272,"skipped":4411,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 01:07:29.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-c02b46f4-b842-42d3-b4cf-8bd7c96e5e99 STEP: Creating a pod to test consume secrets May 24 01:07:29.422: INFO: Waiting up to 5m0s for pod "pod-secrets-3e41da1c-3a71-4c95-884b-ee9a31159f1d" in namespace "secrets-5050" to be "Succeeded or Failed" May 24 01:07:29.472: INFO: Pod "pod-secrets-3e41da1c-3a71-4c95-884b-ee9a31159f1d": Phase="Pending", Reason="", readiness=false. Elapsed: 50.503629ms May 24 01:07:31.519: INFO: Pod "pod-secrets-3e41da1c-3a71-4c95-884b-ee9a31159f1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096637162s May 24 01:07:33.522: INFO: Pod "pod-secrets-3e41da1c-3a71-4c95-884b-ee9a31159f1d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.10036223s STEP: Saw pod success May 24 01:07:33.522: INFO: Pod "pod-secrets-3e41da1c-3a71-4c95-884b-ee9a31159f1d" satisfied condition "Succeeded or Failed" May 24 01:07:33.525: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-3e41da1c-3a71-4c95-884b-ee9a31159f1d container secret-volume-test: STEP: delete the pod May 24 01:07:33.571: INFO: Waiting for pod pod-secrets-3e41da1c-3a71-4c95-884b-ee9a31159f1d to disappear May 24 01:07:33.581: INFO: Pod pod-secrets-3e41da1c-3a71-4c95-884b-ee9a31159f1d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 01:07:33.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5050" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":273,"skipped":4422,"failed":0} ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 01:07:33.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 24 01:07:33.703: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6d10900f-4780-4002-ba9e-f21f79a2c769" in namespace "projected-8969" to be "Succeeded or Failed" May 24 01:07:33.707: INFO: Pod "downwardapi-volume-6d10900f-4780-4002-ba9e-f21f79a2c769": Phase="Pending", Reason="", readiness=false. Elapsed: 3.75425ms May 24 01:07:35.739: INFO: Pod "downwardapi-volume-6d10900f-4780-4002-ba9e-f21f79a2c769": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036570503s May 24 01:07:37.743: INFO: Pod "downwardapi-volume-6d10900f-4780-4002-ba9e-f21f79a2c769": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040616092s STEP: Saw pod success May 24 01:07:37.743: INFO: Pod "downwardapi-volume-6d10900f-4780-4002-ba9e-f21f79a2c769" satisfied condition "Succeeded or Failed" May 24 01:07:37.746: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-6d10900f-4780-4002-ba9e-f21f79a2c769 container client-container: STEP: delete the pod May 24 01:07:37.818: INFO: Waiting for pod downwardapi-volume-6d10900f-4780-4002-ba9e-f21f79a2c769 to disappear May 24 01:07:37.874: INFO: Pod downwardapi-volume-6d10900f-4780-4002-ba9e-f21f79a2c769 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 01:07:37.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8969" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":288,"completed":274,"skipped":4422,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 01:07:37.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin May 24 01:07:38.230: INFO: Waiting up to 5m0s for pod "downwardapi-volume-347c9933-0934-4bfc-8bfe-141241e1ded3" in namespace "downward-api-4446" to be "Succeeded or Failed" May 24 01:07:38.259: INFO: Pod "downwardapi-volume-347c9933-0934-4bfc-8bfe-141241e1ded3": Phase="Pending", Reason="", readiness=false. Elapsed: 28.659573ms May 24 01:07:40.263: INFO: Pod "downwardapi-volume-347c9933-0934-4bfc-8bfe-141241e1ded3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033504624s May 24 01:07:42.267: INFO: Pod "downwardapi-volume-347c9933-0934-4bfc-8bfe-141241e1ded3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037080563s STEP: Saw pod success May 24 01:07:42.267: INFO: Pod "downwardapi-volume-347c9933-0934-4bfc-8bfe-141241e1ded3" satisfied condition "Succeeded or Failed" May 24 01:07:42.270: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-347c9933-0934-4bfc-8bfe-141241e1ded3 container client-container: STEP: delete the pod May 24 01:07:42.430: INFO: Waiting for pod downwardapi-volume-347c9933-0934-4bfc-8bfe-141241e1ded3 to disappear May 24 01:07:42.445: INFO: Pod downwardapi-volume-347c9933-0934-4bfc-8bfe-141241e1ded3 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 01:07:42.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4446" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":288,"completed":275,"skipped":4449,"failed":0} ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 01:07:42.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-6ead9934-d254-4d61-b598-8fcaf69a1512 in namespace container-probe-4157 May 24 01:07:46.707: INFO: Started pod busybox-6ead9934-d254-4d61-b598-8fcaf69a1512 in namespace container-probe-4157 STEP: checking the pod's current state and verifying that restartCount is present May 24 01:07:46.710: INFO: Initial restart count of pod busybox-6ead9934-d254-4d61-b598-8fcaf69a1512 is 0 May 24 01:08:38.870: INFO: Restart count of pod container-probe-4157/busybox-6ead9934-d254-4d61-b598-8fcaf69a1512 is now 1 (52.159824114s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 01:08:38.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4157" for this suite. 
• [SLOW TEST:56.405 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":288,"completed":276,"skipped":4449,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 01:08:38.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-50af90e4-4e9f-42f1-93ad-d843650937e2 STEP: Creating a pod to test consume secrets May 24 01:08:39.065: INFO: Waiting up to 5m0s for pod "pod-secrets-c3020695-c001-4202-a14a-cc65758d442a" in namespace "secrets-4761" to be "Succeeded or Failed" May 24 01:08:39.097: INFO: Pod "pod-secrets-c3020695-c001-4202-a14a-cc65758d442a": Phase="Pending", Reason="", readiness=false. Elapsed: 31.662648ms May 24 01:08:41.106: INFO: Pod "pod-secrets-c3020695-c001-4202-a14a-cc65758d442a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040962273s May 24 01:08:43.111: INFO: Pod "pod-secrets-c3020695-c001-4202-a14a-cc65758d442a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04587985s STEP: Saw pod success May 24 01:08:43.111: INFO: Pod "pod-secrets-c3020695-c001-4202-a14a-cc65758d442a" satisfied condition "Succeeded or Failed" May 24 01:08:43.115: INFO: Trying to get logs from node latest-worker pod pod-secrets-c3020695-c001-4202-a14a-cc65758d442a container secret-volume-test: STEP: delete the pod May 24 01:08:43.184: INFO: Waiting for pod pod-secrets-c3020695-c001-4202-a14a-cc65758d442a to disappear May 24 01:08:43.196: INFO: Pod pod-secrets-c3020695-c001-4202-a14a-cc65758d442a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 01:08:43.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4761" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":288,"completed":277,"skipped":4463,"failed":0} S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 01:08:43.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 01:08:43.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7320" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":288,"completed":278,"skipped":4464,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 01:08:43.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:179 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating pod May 24 01:08:47.490: INFO: Pod pod-hostip-fd4032e2-a735-4079-adb0-4b068d7773ab has hostIP: 172.17.0.13 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 01:08:47.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3829" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":288,"completed":279,"skipped":4526,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 01:08:47.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token STEP: reading a file in the container May 24 01:08:52.080: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7626 pod-service-account-a439bc79-78c5-44b5-9207-4f3c108591da -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 24 01:08:52.334: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7626 pod-service-account-a439bc79-78c5-44b5-9207-4f3c108591da -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 24 01:08:52.528: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7626 pod-service-account-a439bc79-78c5-44b5-9207-4f3c108591da -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 01:08:52.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7626" for this suite. 
• [SLOW TEST:5.400 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":288,"completed":280,"skipped":4627,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 01:08:52.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
May 24 01:08:53.108: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
May 24 01:08:55.187: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 01:08:56.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9871" for this suite.
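------------------------------
The quota check above boils down to reading the rc's status conditions: an rc asking for more replicas than the pod quota allows gets a ReplicaFailure condition, and scaling it down clears it. A sketch of reading that condition with client-go (the rc name matches the log; the kubeconfig path and namespace are placeholders):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	rc, err := cs.CoreV1().ReplicationControllers("default").Get(context.TODO(), "condition-test", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Surfaced by the rc controller when pod creation is rejected,
	// e.g. "forbidden: exceeded quota".
	for _, c := range rc.Status.Conditions {
		if c.Type == corev1.ReplicationControllerReplicaFailure {
			fmt.Printf("ReplicaFailure=%s: %s\n", c.Status, c.Message)
		}
	}
}
------------------------------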
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":288,"completed":281,"skipped":4642,"failed":0} SSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 01:08:56.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-6b4183e3-a16b-4c2b-b8e7-05cf659eec5c STEP: Creating a pod to test consume secrets May 24 01:08:57.431: INFO: Waiting up to 5m0s for pod "pod-secrets-6f7cde50-49a4-4891-b24e-f33af721d0d2" in namespace "secrets-3082" to be "Succeeded or Failed" May 24 01:08:57.627: INFO: Pod "pod-secrets-6f7cde50-49a4-4891-b24e-f33af721d0d2": Phase="Pending", Reason="", readiness=false. Elapsed: 195.232047ms May 24 01:08:59.631: INFO: Pod "pod-secrets-6f7cde50-49a4-4891-b24e-f33af721d0d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.199543074s May 24 01:09:01.634: INFO: Pod "pod-secrets-6f7cde50-49a4-4891-b24e-f33af721d0d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.202773757s STEP: Saw pod success May 24 01:09:01.634: INFO: Pod "pod-secrets-6f7cde50-49a4-4891-b24e-f33af721d0d2" satisfied condition "Succeeded or Failed" May 24 01:09:01.637: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-6f7cde50-49a4-4891-b24e-f33af721d0d2 container secret-volume-test: STEP: delete the pod May 24 01:09:01.670: INFO: Waiting for pod pod-secrets-6f7cde50-49a4-4891-b24e-f33af721d0d2 to disappear May 24 01:09:01.685: INFO: Pod pod-secrets-6f7cde50-49a4-4891-b24e-f33af721d0d2 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 01:09:01.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3082" for this suite. STEP: Destroying namespace "secret-namespace-3580" for this suite. 
• [SLOW TEST:5.567 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":288,"completed":282,"skipped":4647,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 01:09:01.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 24 01:09:05.208: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 01:09:05.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4112" for this suite.
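------------------------------
The policy under test is a per-container field. With FallbackToLogsOnError, the kubelet falls back to the container log only when the container failed and the file at terminationMessagePath is empty; here the container writes "OK" to the file, so the file wins, which is the "Expected: &{OK}" assertion above. A sketch of such a container (v1.18-era types; the name is a placeholder):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// The container writes its own termination message and exits zero,
	// so FallbackToLogsOnError never consults the log.
	c := corev1.Container{
		Name:                     "termination-message-container", // placeholder
		Image:                    "busybox",
		Command:                  []string{"/bin/sh", "-c", "echo -n OK > /dev/termination-log"},
		TerminationMessagePath:   "/dev/termination-log",
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}
------------------------------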
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":288,"completed":283,"skipped":4661,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 01:09:05.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on tmpfs May 24 01:09:05.720: INFO: Waiting up to 5m0s for pod "pod-d8bcc8c2-f831-4f1e-a453-2cc18510d0ad" in namespace "emptydir-3392" to be "Succeeded or Failed" May 24 01:09:05.735: INFO: Pod "pod-d8bcc8c2-f831-4f1e-a453-2cc18510d0ad": Phase="Pending", Reason="", readiness=false. Elapsed: 15.630335ms May 24 01:09:07.748: INFO: Pod "pod-d8bcc8c2-f831-4f1e-a453-2cc18510d0ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028383806s May 24 01:09:09.764: INFO: Pod "pod-d8bcc8c2-f831-4f1e-a453-2cc18510d0ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044794179s STEP: Saw pod success May 24 01:09:09.764: INFO: Pod "pod-d8bcc8c2-f831-4f1e-a453-2cc18510d0ad" satisfied condition "Succeeded or Failed" May 24 01:09:09.772: INFO: Trying to get logs from node latest-worker2 pod pod-d8bcc8c2-f831-4f1e-a453-2cc18510d0ad container test-container: STEP: delete the pod May 24 01:09:09.814: INFO: Waiting for pod pod-d8bcc8c2-f831-4f1e-a453-2cc18510d0ad to disappear May 24 01:09:09.825: INFO: Pod pod-d8bcc8c2-f831-4f1e-a453-2cc18510d0ad no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 01:09:09.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3392" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":288,"completed":284,"skipped":4662,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client May 24 01:09:09.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0524 01:09:22.257677 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 24 01:09:22.257: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 May 24 01:09:22.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9553" for this suite. 
• [SLOW TEST:12.862 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":288,"completed":285,"skipped":4709,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 01:09:22.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171
[It] should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating server pod server in namespace prestop-7566
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-7566
STEP: Deleting pre-stop pod
May 24 01:09:38.575: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 01:09:38.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-7566" for this suite.
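------------------------------
A preStop handler is what turns the tester pod's deletion into the "prestop": 1 count in the JSON above: the kubelet runs the handler before sending SIGTERM. A sketch of a container carrying such a hook (v1.18-era types, where the hook type is still corev1.Handler; the exact command and the server address are placeholders, not the suite's fixture):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// On deletion, the kubelet runs this handler to completion (or until
	// the grace period expires) before stopping the container.
	c := corev1.Container{
		Name:  "tester", // placeholder
		Image: "busybox",
		Lifecycle: &corev1.Lifecycle{
			PreStop: &corev1.Handler{
				Exec: &corev1.ExecAction{
					// placeholder address for the server pod
					Command: []string{"wget", "-qO-", "http://server-pod-ip:8080/prestop"},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}
------------------------------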
• [SLOW TEST:15.980 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":288,"completed":286,"skipped":4729,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 01:09:38.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
May 24 01:09:47.074: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 24 01:09:47.123: INFO: Pod pod-with-poststart-exec-hook still exists
May 24 01:09:49.124: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 24 01:09:49.128: INFO: Pod pod-with-poststart-exec-hook still exists
May 24 01:09:51.124: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 24 01:09:51.129: INFO: Pod pod-with-poststart-exec-hook still exists
May 24 01:09:53.124: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 24 01:09:53.128: INFO: Pod pod-with-poststart-exec-hook still exists
May 24 01:09:55.124: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 24 01:09:55.128: INFO: Pod pod-with-poststart-exec-hook still exists
May 24 01:09:57.124: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 24 01:09:57.128: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 01:09:57.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4462" for this suite.
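------------------------------
The postStart variant differs from preStop mainly in when the kubelet runs it: asynchronously, right after the container starts, which is why the spec polls the handler pod before deleting anything. A sketch of the hook (same v1.18-era types; the handler-pod URL and command are placeholders):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Runs in the container's own filesystem right after start; the pod
	// is not considered started until the hook completes.
	c := corev1.Container{
		Name:  "pod-with-poststart-exec-hook",
		Image: "busybox",
		Lifecycle: &corev1.Lifecycle{
			PostStart: &corev1.Handler{
				Exec: &corev1.ExecAction{
					// placeholder address for the handler pod
					Command: []string{"wget", "-qO-", "http://handler-pod-ip:8080/echo?msg=poststart"},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}
------------------------------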
• [SLOW TEST:18.461 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":288,"completed":287,"skipped":4789,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
May 24 01:09:57.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
May 24 01:09:57.289: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6884f742-e203-40e3-bc5e-d73e8ed6ad37" in namespace "downward-api-7814" to be "Succeeded or Failed"
May 24 01:09:57.305: INFO: Pod "downwardapi-volume-6884f742-e203-40e3-bc5e-d73e8ed6ad37": Phase="Pending", Reason="", readiness=false. Elapsed: 15.503675ms
May 24 01:09:59.411: INFO: Pod "downwardapi-volume-6884f742-e203-40e3-bc5e-d73e8ed6ad37": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121887272s
May 24 01:10:01.426: INFO: Pod "downwardapi-volume-6884f742-e203-40e3-bc5e-d73e8ed6ad37": Phase="Running", Reason="", readiness=true. Elapsed: 4.136316293s
May 24 01:10:03.430: INFO: Pod "downwardapi-volume-6884f742-e203-40e3-bc5e-d73e8ed6ad37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.140634399s
STEP: Saw pod success
May 24 01:10:03.430: INFO: Pod "downwardapi-volume-6884f742-e203-40e3-bc5e-d73e8ed6ad37" satisfied condition "Succeeded or Failed"
May 24 01:10:03.433: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-6884f742-e203-40e3-bc5e-d73e8ed6ad37 container client-container:
STEP: delete the pod
May 24 01:10:03.472: INFO: Waiting for pod downwardapi-volume-6884f742-e203-40e3-bc5e-d73e8ed6ad37 to disappear
May 24 01:10:03.484: INFO: Pod downwardapi-volume-6884f742-e203-40e3-bc5e-d73e8ed6ad37 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
May 24 01:10:03.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7814" for this suite.
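------------------------------
The memory request lands in the container as a file via a downwardAPI volume: the container cats the projected file and the test asserts its contents. A sketch of the pod shape this spec creates (pod name, mount path, and the 32Mi value are placeholders; with the default divisor of 1 the file holds the request in bytes):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"}, // placeholder
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"cat", "/etc/podinfo/memory_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("32Mi"), // placeholder value
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							// Projects requests.memory of client-container
							// into the file /etc/podinfo/memory_request.
							Path: "memory_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.memory",
							},
						}},
					},
				},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------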
• [SLOW TEST:6.353 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":288,"completed":288,"skipped":4796,"failed":0}
SSSSSSSSSSS
May 24 01:10:03.492: INFO: Running AfterSuite actions on all nodes
May 24 01:10:03.492: INFO: Running AfterSuite actions on node 1
May 24 01:10:03.492: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":288,"completed":288,"skipped":4807,"failed":0}

Ran 288 of 5095 Specs in 5525.909 seconds
SUCCESS! -- 288 Passed | 0 Failed | 0 Pending | 4807 Skipped
PASS