I0809 23:20:20.593404 8 test_context.go:429] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0809 23:20:20.593662 8 e2e.go:129] Starting e2e run "b084a0d4-e762-408a-9ee2-94ee5fd82e54" on Ginkgo node 1
{"msg":"Test Suite starting","total":303,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1597015219 - Will randomize all specs
Will run 303 of 5238 specs
Aug 9 23:20:20.656: INFO: >>> kubeConfig: /root/.kube/config
Aug 9 23:20:20.658: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 9 23:20:20.685: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 9 23:20:20.719: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug 9 23:20:20.719: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Aug 9 23:20:20.719: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug 9 23:20:20.726: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Aug 9 23:20:20.726: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Aug 9 23:20:20.726: INFO: e2e test version: v1.20.0-alpha.0.523+97c5f1f7632f2d
Aug 9 23:20:20.727: INFO: kube-apiserver version: v1.19.0-rc.1
Aug 9 23:20:20.727: INFO: >>> kubeConfig: /root/.kube/config
Aug 9 23:20:20.731: INFO: Cluster IP family: ipv4
S
------------------------------
[sig-network] Ingress API
  should support creating Ingress API operations [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Ingress API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 9 23:20:20.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ingress
Aug 9 23:20:20.864: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support creating Ingress API operations [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.iov1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Aug 9 23:20:20.899: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Aug 9 23:20:20.904: INFO: starting watch
STEP: patching
STEP: updating
Aug 9 23:20:21.077: INFO: waiting for watch events with expected annotations
Aug 9 23:20:21.077: INFO: saw patched and updated annotations
STEP: patching /status
STEP: updating /status
STEP: get /status
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] Ingress API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 9 23:20:21.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-7564" for this suite.
•{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":303,"completed":1,"skipped":1,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota
  should create a ResourceQuota and capture the life of a pod.
[Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 9 23:20:21.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 9 23:20:34.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9187" for this suite.
• [SLOW TEST:13.242 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":303,"completed":2,"skipped":11,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 9 23:20:34.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Aug 9 23:20:34.658: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-fbf44e36-cabf-4b8b-8a02-68fe00b6e3cd" in namespace "security-context-test-8134" to be "Succeeded or Failed"
Aug 9 23:20:34.664: INFO: Pod "busybox-privileged-false-fbf44e36-cabf-4b8b-8a02-68fe00b6e3cd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.648572ms
Aug 9 23:20:36.900: INFO: Pod "busybox-privileged-false-fbf44e36-cabf-4b8b-8a02-68fe00b6e3cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.241769074s
Aug 9 23:20:38.904: INFO: Pod "busybox-privileged-false-fbf44e36-cabf-4b8b-8a02-68fe00b6e3cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.246194236s
Aug 9 23:20:38.904: INFO: Pod "busybox-privileged-false-fbf44e36-cabf-4b8b-8a02-68fe00b6e3cd" satisfied condition "Succeeded or Failed"
Aug 9 23:20:38.945: INFO: Got logs for pod "busybox-privileged-false-fbf44e36-cabf-4b8b-8a02-68fe00b6e3cd": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 9 23:20:38.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8134" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":3,"skipped":75,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 9 23:20:38.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
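The busybox-privileged-false pod above passes precisely because its container runs with `privileged: false`: the `ip` command inside it is refused by the kernel, hence the captured log "ip: RTNETLINK answers: Operation not permitted". A minimal sketch of the kind of pod manifest such a test exercises — the name, image, and command here are illustrative, not copied from the e2e framework:

```python
import json

# Hypothetical pod manifest mirroring the busybox-privileged-false test pod:
# the container declares privileged=False, so privileged network operations
# inside it are expected to be denied with "Operation not permitted".
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "busybox-privileged-false-example"},
    "spec": {
        "containers": [
            {
                "name": "busybox",
                "image": "busybox",
                # Attempt a privileged network operation; the e2e test passes
                # when the kernel refuses it.
                "command": ["sh", "-c", "ip link add dummy0 type dummy || true"],
                "securityContext": {"privileged": False},
            }
        ],
        "restartPolicy": "Never",
    },
}

manifest = json.dumps(pod, indent=2)
print(manifest)
```

Submitting such a manifest and then reading the container log is, in effect, what the framework's "Waiting up to 5m0s for pod … to be 'Succeeded or Failed'" loop above automates.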
Aug 9 23:20:39.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4736" for this suite.
STEP: Destroying namespace "nspatchtest-5ec15eca-e6d7-4ed2-b7ee-7e7cb685c2c1-9805" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":303,"completed":4,"skipped":86,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 9 23:20:39.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Aug 9 23:20:49.464: INFO: 0 pods remaining
Aug 9 23:20:49.464: INFO: 0 pods has nil DeletionTimestamp
Aug 9 23:20:49.464: INFO:
Aug 9 23:20:50.591: INFO: 0 pods remaining
Aug 9 23:20:50.591: INFO: 0 pods has nil DeletionTimestamp
Aug 9 23:20:50.591: INFO:
Aug 9 23:20:51.526: INFO: 0 pods remaining
Aug 9 23:20:51.526: INFO: 0 pods has nil DeletionTimestamp
Aug 9 23:20:51.526: INFO:
STEP: Gathering metrics
W0809 23:20:52.682536 8 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Aug 9 23:21:54.714: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 9 23:21:54.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7068" for this suite.
• [SLOW TEST:74.838 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":303,"completed":5,"skipped":98,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 9 23:21:54.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name cm-test-opt-del-5e470530-f6d9-49bc-88b8-7d4df6f0d189
STEP: Creating configMap with name cm-test-opt-upd-009c2c52-fe3a-4481-b04d-117639ea32dd
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-5e470530-f6d9-49bc-88b8-7d4df6f0d189
STEP: Updating configmap
cm-test-opt-upd-009c2c52-fe3a-4481-b04d-117639ea32dd
STEP: Creating configMap with name cm-test-opt-create-99064ba1-1fd5-4a32-bbad-1eb84c45742c
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 9 23:22:04.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8474" for this suite.
• [SLOW TEST:10.257 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":6,"skipped":107,"failed":0}
SSSSSS
------------------------------
[sig-network] Services
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 9 23:22:04.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service multi-endpoint-test in namespace services-872
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-872 to expose endpoints map[]
Aug 9 23:22:05.183: INFO: Failed go get Endpoints object: endpoints "multi-endpoint-test" not found
Aug 9 23:22:06.192: INFO: successfully validated that service multi-endpoint-test in namespace services-872 exposes endpoints map[]
STEP: Creating pod pod1 in namespace services-872
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-872 to expose endpoints map[pod1:[100]]
Aug 9 23:22:10.260: INFO: successfully validated that service multi-endpoint-test in namespace services-872 exposes endpoints map[pod1:[100]]
STEP: Creating pod pod2 in namespace services-872
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-872 to expose endpoints map[pod1:[100] pod2:[101]]
Aug 9 23:22:14.321: INFO: Unexpected endpoints: found map[b49a5eec-97fe-48aa-98a5-25147850276e:[100]], expected map[pod1:[100] pod2:[101]], will retry
Aug 9 23:22:15.325: INFO: successfully validated that service multi-endpoint-test in namespace services-872 exposes endpoints map[pod1:[100] pod2:[101]]
STEP: Deleting pod pod1 in namespace services-872
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-872 to expose endpoints map[pod2:[101]]
Aug 9 23:22:15.428: INFO: successfully validated that service multi-endpoint-test in namespace services-872 exposes endpoints map[pod2:[101]]
STEP: Deleting pod pod2 in namespace services-872
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-872 to expose endpoints map[]
Aug 9 23:22:17.043: INFO: successfully validated that service multi-endpoint-test in namespace services-872 exposes endpoints map[]
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 9 23:22:17.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-872" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
• [SLOW TEST:13.673 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":303,"completed":7,"skipped":113,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 9 23:22:18.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name s-test-opt-del-0ee8a162-5d42-44b7-ac6c-4dc08aa3d84c
STEP: Creating secret with name s-test-opt-upd-da039d76-e574-496c-b60b-07586bf13ec6
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-0ee8a162-5d42-44b7-ac6c-4dc08aa3d84c
STEP: Updating secret s-test-opt-upd-da039d76-e574-496c-b60b-07586bf13ec6
STEP: Creating secret with name s-test-opt-create-9bb07403-e0c6-40be-8410-0fb34cae534d
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 9 23:23:58.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3315" for this suite.
• [SLOW TEST:99.492 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":8,"skipped":123,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation
  should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 9 23:23:58.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 9 23:23:58.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-6022" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":303,"completed":9,"skipped":129,"failed":0}
SSS
------------------------------
[sig-storage] Projected downwardAPI
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 9 23:23:58.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Aug 9 23:23:58.313: INFO: Waiting up to 5m0s for pod "downwardapi-volume-428cd243-f7e8-4d9c-80ce-d5248335162a" in namespace "projected-1939" to be "Succeeded or Failed"
Aug 9 23:23:58.316: INFO: Pod "downwardapi-volume-428cd243-f7e8-4d9c-80ce-d5248335162a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.555887ms
Aug 9 23:24:00.320: INFO: Pod "downwardapi-volume-428cd243-f7e8-4d9c-80ce-d5248335162a": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.00662251s
Aug 9 23:24:02.335: INFO: Pod "downwardapi-volume-428cd243-f7e8-4d9c-80ce-d5248335162a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021401326s
Aug 9 23:24:04.339: INFO: Pod "downwardapi-volume-428cd243-f7e8-4d9c-80ce-d5248335162a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.025091838s
STEP: Saw pod success
Aug 9 23:24:04.339: INFO: Pod "downwardapi-volume-428cd243-f7e8-4d9c-80ce-d5248335162a" satisfied condition "Succeeded or Failed"
Aug 9 23:24:04.359: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-428cd243-f7e8-4d9c-80ce-d5248335162a container client-container:
STEP: delete the pod
Aug 9 23:24:04.434: INFO: Waiting for pod downwardapi-volume-428cd243-f7e8-4d9c-80ce-d5248335162a to disappear
Aug 9 23:24:04.454: INFO: Pod downwardapi-volume-428cd243-f7e8-4d9c-80ce-d5248335162a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 9 23:24:04.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1939" for this suite.
• [SLOW TEST:6.241 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":10,"skipped":132,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 9 23:24:04.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test override command
Aug 9 23:24:04.607: INFO: Waiting up to 5m0s for pod "client-containers-323140b0-7826-44ff-8f46-a59ecdd42d3a" in namespace "containers-1855" to be "Succeeded or Failed"
Aug 9 23:24:04.772: INFO: Pod "client-containers-323140b0-7826-44ff-8f46-a59ecdd42d3a": Phase="Pending", Reason="", readiness=false. Elapsed: 164.824411ms
Aug 9 23:24:06.819: INFO: Pod "client-containers-323140b0-7826-44ff-8f46-a59ecdd42d3a": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.211947451s
Aug 9 23:24:08.824: INFO: Pod "client-containers-323140b0-7826-44ff-8f46-a59ecdd42d3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.216243944s
STEP: Saw pod success
Aug 9 23:24:08.824: INFO: Pod "client-containers-323140b0-7826-44ff-8f46-a59ecdd42d3a" satisfied condition "Succeeded or Failed"
Aug 9 23:24:08.826: INFO: Trying to get logs from node latest-worker pod client-containers-323140b0-7826-44ff-8f46-a59ecdd42d3a container test-container:
STEP: delete the pod
Aug 9 23:24:08.959: INFO: Waiting for pod client-containers-323140b0-7826-44ff-8f46-a59ecdd42d3a to disappear
Aug 9 23:24:08.982: INFO: Pod client-containers-323140b0-7826-44ff-8f46-a59ecdd42d3a no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 9 23:24:08.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1855" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":303,"completed":11,"skipped":156,"failed":0}
S
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 9 23:24:08.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-95c605a2-a69f-4e03-bcf8-7b586a367995
STEP: Creating a pod to test consume configMaps
Aug 9 23:24:09.145: INFO: Waiting up to 5m0s for pod "pod-configmaps-18974c80-d888-4983-98bd-4e0d8e497428" in namespace "configmap-9707" to be "Succeeded or Failed"
Aug 9 23:24:09.163: INFO: Pod "pod-configmaps-18974c80-d888-4983-98bd-4e0d8e497428": Phase="Pending", Reason="", readiness=false. Elapsed: 18.135121ms
Aug 9 23:24:11.216: INFO: Pod "pod-configmaps-18974c80-d888-4983-98bd-4e0d8e497428": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070591368s
Aug 9 23:24:13.299: INFO: Pod "pod-configmaps-18974c80-d888-4983-98bd-4e0d8e497428": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.153915566s
STEP: Saw pod success
Aug 9 23:24:13.299: INFO: Pod "pod-configmaps-18974c80-d888-4983-98bd-4e0d8e497428" satisfied condition "Succeeded or Failed"
Aug 9 23:24:13.317: INFO: Trying to get logs from node latest-worker pod pod-configmaps-18974c80-d888-4983-98bd-4e0d8e497428 container configmap-volume-test:
STEP: delete the pod
Aug 9 23:24:13.390: INFO: Waiting for pod pod-configmaps-18974c80-d888-4983-98bd-4e0d8e497428 to disappear
Aug 9 23:24:13.394: INFO: Pod pod-configmaps-18974c80-d888-4983-98bd-4e0d8e497428 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 9 23:24:13.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9707" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":303,"completed":12,"skipped":157,"failed":0}
SSSSSS
------------------------------
[k8s.io] Pods
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 9 23:24:13.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating pod
Aug 9 23:24:17.521: INFO: Pod pod-hostip-50407556-c53d-4c70-a2ee-95d3da357e29 has hostIP: 172.18.0.14
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 9 23:24:17.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7692" for this suite.
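Nearly every entry above follows the same shape: "Waiting up to 5m0s for pod X … to be 'Succeeded or Failed'", then periodic re-checks of the pod phase until it matches or the deadline passes. A generic sketch of that poll-with-timeout loop — the function name and intervals are illustrative; the real framework implements this in Go via its wait helpers:

```python
import time

def wait_for_condition(check, timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Re-evaluate `check` every `interval` seconds until it returns True
    or `timeout` seconds elapse; True on success, False on timeout."""
    deadline = clock() + timeout
    while True:
        if check():
            return True
        if clock() >= deadline:
            return False
        sleep(interval)

# Simulate a pod that reports Pending twice, then Succeeded, mirroring the
# Pending/Pending/Succeeded progressions in the log above.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_condition(lambda: next(phases) == "Succeeded",
                            timeout=10.0, interval=0.0)
print(result)  # → True
```

Injecting `clock` and `sleep` keeps the loop testable without real waiting, which is why the simulated run above completes instantly.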
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":303,"completed":13,"skipped":163,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 9 23:24:17.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap configmap-1043/configmap-test-7163b053-b1a6-4efe-8777-9e6a0bab7cff
STEP: Creating a pod to test consume configMaps
Aug 9 23:24:17.655: INFO: Waiting up to 5m0s for pod "pod-configmaps-c8678023-9753-41de-842e-bccf532460d6" in namespace "configmap-1043" to be "Succeeded or Failed"
Aug 9 23:24:17.659: INFO: Pod "pod-configmaps-c8678023-9753-41de-842e-bccf532460d6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.907162ms
Aug 9 23:24:19.663: INFO: Pod "pod-configmaps-c8678023-9753-41de-842e-bccf532460d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007707579s
Aug 9 23:24:21.674: INFO: Pod "pod-configmaps-c8678023-9753-41de-842e-bccf532460d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018514996s
STEP: Saw pod success
Aug 9 23:24:21.674: INFO: Pod "pod-configmaps-c8678023-9753-41de-842e-bccf532460d6" satisfied condition "Succeeded or Failed"
Aug 9 23:24:21.676: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-c8678023-9753-41de-842e-bccf532460d6 container env-test:
STEP: delete the pod
Aug 9 23:24:21.737: INFO: Waiting for pod pod-configmaps-c8678023-9753-41de-842e-bccf532460d6 to disappear
Aug 9 23:24:21.743: INFO: Pod pod-configmaps-c8678023-9753-41de-842e-bccf532460d6 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 9 23:24:21.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1043" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":303,"completed":14,"skipped":195,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 9 23:24:21.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 9 23:24:23.189: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 9 23:24:25.196: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732612263, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732612263, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732612263, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732612263, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 9 23:24:27.200: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732612263, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732612263, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732612263, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732612263, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet
\"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 9 23:24:30.252: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:24:30.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4899" for this suite. STEP: Destroying namespace "webhook-4899-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.722 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":303,"completed":15,"skipped":205,"failed":0} SSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:24:30.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Aug 9 23:24:30.627: INFO: Waiting up to 5m0s for pod "downward-api-81daaba6-4b27-438d-b270-24b94ad6a8b9" in namespace "downward-api-8373" to be "Succeeded or Failed" Aug 9 23:24:30.654: INFO: Pod "downward-api-81daaba6-4b27-438d-b270-24b94ad6a8b9": Phase="Pending", Reason="", readiness=false. Elapsed: 27.135714ms Aug 9 23:24:32.658: INFO: Pod "downward-api-81daaba6-4b27-438d-b270-24b94ad6a8b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031098715s Aug 9 23:24:34.662: INFO: Pod "downward-api-81daaba6-4b27-438d-b270-24b94ad6a8b9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034929889s Aug 9 23:24:36.707: INFO: Pod "downward-api-81daaba6-4b27-438d-b270-24b94ad6a8b9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.07973161s STEP: Saw pod success Aug 9 23:24:36.707: INFO: Pod "downward-api-81daaba6-4b27-438d-b270-24b94ad6a8b9" satisfied condition "Succeeded or Failed" Aug 9 23:24:36.743: INFO: Trying to get logs from node latest-worker2 pod downward-api-81daaba6-4b27-438d-b270-24b94ad6a8b9 container dapi-container: STEP: delete the pod Aug 9 23:24:36.876: INFO: Waiting for pod downward-api-81daaba6-4b27-438d-b270-24b94ad6a8b9 to disappear Aug 9 23:24:36.899: INFO: Pod downward-api-81daaba6-4b27-438d-b270-24b94ad6a8b9 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:24:36.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8373" for this suite. • [SLOW TEST:6.425 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":303,"completed":16,"skipped":209,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Events should delete a collection of events [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:24:36.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of events [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of events Aug 9 23:24:37.014: INFO: created test-event-1 Aug 9 23:24:37.025: INFO: created test-event-2 Aug 9 23:24:37.044: INFO: created test-event-3 STEP: get a list of Events with a label in the current namespace STEP: delete collection of events Aug 9 23:24:37.077: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity Aug 9 23:24:37.098: INFO: requesting list of events to confirm quantity [AfterEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:24:37.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-7212" for this suite. •{"msg":"PASSED [sig-api-machinery] Events should delete a collection of events [Conformance]","total":303,"completed":17,"skipped":218,"failed":0} SSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:24:37.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Aug 9 23:24:37.200: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Aug 9 23:24:37.206: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Aug 9 23:24:37.206: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Aug 9 23:24:37.212: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Aug 9 23:24:37.212: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Aug 9 23:24:37.331: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} 
ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Aug 9 23:24:37.331: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Aug 9 23:24:44.697: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:24:44.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-1785" for this suite. • [SLOW TEST:7.610 seconds] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance]","total":303,"completed":18,"skipped":223,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:24:44.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Aug 9 23:24:44.862: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3664' Aug 9 23:24:49.166: INFO: stderr: "" Aug 9 23:24:49.166: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Aug 9 23:24:50.252: INFO: Selector matched 1 pods for map[app:agnhost] Aug 9 23:24:50.252: INFO: Found 0 / 1 Aug 9 23:24:51.283: INFO: Selector matched 1 pods for map[app:agnhost] Aug 9 23:24:51.283: INFO: Found 0 / 1 Aug 9 23:24:52.171: INFO: Selector matched 1 pods for map[app:agnhost] Aug 9 23:24:52.171: INFO: Found 0 / 1 Aug 9 23:24:53.171: INFO: Selector matched 1 pods for map[app:agnhost] Aug 9 23:24:53.171: INFO: Found 0 / 1 Aug 9 23:24:54.192: INFO: Selector matched 1 pods for map[app:agnhost] Aug 9 23:24:54.192: INFO: Found 1 / 1 Aug 9 23:24:54.192: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 STEP: patching all pods Aug 9 23:24:54.200: INFO: Selector matched 1 pods for map[app:agnhost] Aug 9 23:24:54.200: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Aug 9 23:24:54.201: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config patch pod agnhost-primary-wwnrz --namespace=kubectl-3664 -p {"metadata":{"annotations":{"x":"y"}}}' Aug 9 23:24:54.363: INFO: stderr: "" Aug 9 23:24:54.363: INFO: stdout: "pod/agnhost-primary-wwnrz patched\n" STEP: checking annotations Aug 9 23:24:54.387: INFO: Selector matched 1 pods for map[app:agnhost] Aug 9 23:24:54.387: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:24:54.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3664" for this suite. 
• [SLOW TEST:9.669 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1490 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":303,"completed":19,"skipped":245,"failed":0} S ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:24:54.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-bb5b4435-f1cc-4a1d-8e9a-b9b1d4b54912 in namespace container-probe-8620 Aug 9 23:24:58.553: INFO: Started pod busybox-bb5b4435-f1cc-4a1d-8e9a-b9b1d4b54912 in namespace container-probe-8620 STEP: checking the pod's current state and verifying that restartCount is present Aug 9 23:24:58.556: INFO: Initial restart count of pod busybox-bb5b4435-f1cc-4a1d-8e9a-b9b1d4b54912 is 0 Aug 9 23:25:54.722: INFO: 
Restart count of pod container-probe-8620/busybox-bb5b4435-f1cc-4a1d-8e9a-b9b1d4b54912 is now 1 (56.166161748s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:25:54.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8620" for this suite. • [SLOW TEST:60.347 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":303,"completed":20,"skipped":246,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:25:54.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name secret-emptykey-test-65228dcf-cc10-432c-915a-5bb2b77877e7 [AfterEach] [sig-api-machinery] Secrets 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:25:54.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-337" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":303,"completed":21,"skipped":252,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:25:54.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 9 23:25:54.928: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:25:55.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2391" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":303,"completed":22,"skipped":263,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should delete a collection of pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:25:55.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should delete a collection of pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of pods Aug 9 23:25:56.061: INFO: created test-pod-1 Aug 9 23:25:56.083: INFO: created test-pod-2 Aug 9 23:25:56.113: INFO: created test-pod-3 STEP: waiting for all 3 pods to be located STEP: waiting for all pods to be deleted [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:25:56.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6368" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should delete a collection of pods [Conformance]","total":303,"completed":23,"skipped":278,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:25:56.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 9 23:25:56.561: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-4ce4825f-b393-4f42-96c6-a4053cee94c0" in namespace "security-context-test-3598" to be "Succeeded or Failed" Aug 9 23:25:56.573: INFO: Pod "busybox-readonly-false-4ce4825f-b393-4f42-96c6-a4053cee94c0": Phase="Pending", Reason="", readiness=false. Elapsed: 12.024019ms Aug 9 23:25:58.576: INFO: Pod "busybox-readonly-false-4ce4825f-b393-4f42-96c6-a4053cee94c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0152657s Aug 9 23:26:00.709: INFO: Pod "busybox-readonly-false-4ce4825f-b393-4f42-96c6-a4053cee94c0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.148591396s Aug 9 23:26:02.713: INFO: Pod "busybox-readonly-false-4ce4825f-b393-4f42-96c6-a4053cee94c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.152741883s Aug 9 23:26:02.713: INFO: Pod "busybox-readonly-false-4ce4825f-b393-4f42-96c6-a4053cee94c0" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:26:02.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3598" for this suite. • [SLOW TEST:6.302 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 When creating a pod with readOnlyRootFilesystem /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":303,"completed":24,"skipped":298,"failed":0} S ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:26:02.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace 
[BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-1982 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-1982 STEP: creating replication controller externalsvc in namespace services-1982 I0809 23:26:02.969860 8 runners.go:190] Created replication controller with name: externalsvc, namespace: services-1982, replica count: 2 I0809 23:26:06.020195 8 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0809 23:26:09.020487 8 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Aug 9 23:26:09.060: INFO: Creating new exec pod Aug 9 23:26:13.090: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-1982 execpodv49cw -- /bin/sh -x -c nslookup clusterip-service.services-1982.svc.cluster.local' Aug 9 23:26:13.362: INFO: stderr: "I0809 23:26:13.243852 67 log.go:181] (0xc0005c2dc0) (0xc0009ac3c0) Create stream\nI0809 23:26:13.243911 67 log.go:181] (0xc0005c2dc0) (0xc0009ac3c0) Stream added, broadcasting: 1\nI0809 23:26:13.248576 67 log.go:181] (0xc0005c2dc0) Reply frame received for 1\nI0809 23:26:13.248608 67 log.go:181] (0xc0005c2dc0) (0xc0009a2640) Create stream\nI0809 23:26:13.248617 67 log.go:181] (0xc0005c2dc0) (0xc0009a2640) Stream added, broadcasting: 3\nI0809 23:26:13.249571 67 
log.go:181] (0xc0005c2dc0) Reply frame received for 3\nI0809 23:26:13.249627 67 log.go:181] (0xc0005c2dc0) (0xc0007fc0a0) Create stream\nI0809 23:26:13.249644 67 log.go:181] (0xc0005c2dc0) (0xc0007fc0a0) Stream added, broadcasting: 5\nI0809 23:26:13.250397 67 log.go:181] (0xc0005c2dc0) Reply frame received for 5\nI0809 23:26:13.346826 67 log.go:181] (0xc0005c2dc0) Data frame received for 5\nI0809 23:26:13.346859 67 log.go:181] (0xc0007fc0a0) (5) Data frame handling\nI0809 23:26:13.346882 67 log.go:181] (0xc0007fc0a0) (5) Data frame sent\n+ nslookup clusterip-service.services-1982.svc.cluster.local\nI0809 23:26:13.354037 67 log.go:181] (0xc0005c2dc0) Data frame received for 3\nI0809 23:26:13.354067 67 log.go:181] (0xc0009a2640) (3) Data frame handling\nI0809 23:26:13.354093 67 log.go:181] (0xc0009a2640) (3) Data frame sent\nI0809 23:26:13.355134 67 log.go:181] (0xc0005c2dc0) Data frame received for 3\nI0809 23:26:13.355151 67 log.go:181] (0xc0009a2640) (3) Data frame handling\nI0809 23:26:13.355164 67 log.go:181] (0xc0009a2640) (3) Data frame sent\nI0809 23:26:13.355482 67 log.go:181] (0xc0005c2dc0) Data frame received for 5\nI0809 23:26:13.355502 67 log.go:181] (0xc0007fc0a0) (5) Data frame handling\nI0809 23:26:13.355529 67 log.go:181] (0xc0005c2dc0) Data frame received for 3\nI0809 23:26:13.355554 67 log.go:181] (0xc0009a2640) (3) Data frame handling\nI0809 23:26:13.357406 67 log.go:181] (0xc0005c2dc0) Data frame received for 1\nI0809 23:26:13.357432 67 log.go:181] (0xc0009ac3c0) (1) Data frame handling\nI0809 23:26:13.357448 67 log.go:181] (0xc0009ac3c0) (1) Data frame sent\nI0809 23:26:13.357471 67 log.go:181] (0xc0005c2dc0) (0xc0009ac3c0) Stream removed, broadcasting: 1\nI0809 23:26:13.357494 67 log.go:181] (0xc0005c2dc0) Go away received\nI0809 23:26:13.357799 67 log.go:181] (0xc0005c2dc0) (0xc0009ac3c0) Stream removed, broadcasting: 1\nI0809 23:26:13.357815 67 log.go:181] (0xc0005c2dc0) (0xc0009a2640) Stream removed, broadcasting: 3\nI0809 23:26:13.357820 67 
log.go:181] (0xc0005c2dc0) (0xc0007fc0a0) Stream removed, broadcasting: 5\n" Aug 9 23:26:13.363: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-1982.svc.cluster.local\tcanonical name = externalsvc.services-1982.svc.cluster.local.\nName:\texternalsvc.services-1982.svc.cluster.local\nAddress: 10.100.82.24\n\n" STEP: deleting ReplicationController externalsvc in namespace services-1982, will wait for the garbage collector to delete the pods Aug 9 23:26:13.435: INFO: Deleting ReplicationController externalsvc took: 18.930557ms Aug 9 23:26:13.835: INFO: Terminating ReplicationController externalsvc pods took: 400.22265ms Aug 9 23:26:24.318: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:26:24.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1982" for this suite. 
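The `nslookup` stdout captured above shows the ClusterIP service resolving through a CNAME to `externalsvc.services-1982.svc.cluster.local` once the service type is changed to ExternalName. A minimal sketch of extracting that canonical name from the captured output (the helper `canonical_name` is illustrative, not part of the e2e framework):

```python
# Hypothetical helper: pull the CNAME target out of nslookup stdout.
def canonical_name(nslookup_stdout: str) -> str:
    for line in nslookup_stdout.splitlines():
        if "canonical name =" in line:
            # nslookup prints the target as a fully qualified name with a trailing dot
            return line.split("canonical name =")[1].strip().rstrip(".")
    raise ValueError("no CNAME found in nslookup output")

# The exact stdout recorded in the log above.
stdout = ("Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\n"
          "clusterip-service.services-1982.svc.cluster.local\tcanonical name = "
          "externalsvc.services-1982.svc.cluster.local.\n"
          "Name:\texternalsvc.services-1982.svc.cluster.local\n"
          "Address: 10.100.82.24\n\n")
print(canonical_name(stdout))  # externalsvc.services-1982.svc.cluster.local
```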
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:21.709 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":303,"completed":25,"skipped":299,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:26:24.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:26:24.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-328" for this suite. 
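The Secrets test above patches a secret and then finds it again by the label added in the patch. The log does not show which patch type the test sends; as one possibility, the semantics of a JSON merge patch (RFC 7386) can be sketched locally — the key names below are illustrative, not taken from the test:

```python
# Sketch of JSON merge patch semantics (RFC 7386), applied client-side for
# illustration; the real test asks the API server to do this.
def merge_patch(target: dict, patch: dict) -> dict:
    result = dict(target)
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)  # null in the patch deletes the key
        elif isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = merge_patch(result[key], value)  # objects merge recursively
        else:
            result[key] = value  # scalars and arrays replace wholesale
    return result

secret = {"metadata": {"labels": {"testsecret": "true"}},
          "data": {"key": "dmFsdWU="}}
patched = merge_patch(secret, {"metadata": {"labels": {"testsecret-constant": "true"}}})
# Existing labels survive; the patched label is added alongside them.
```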
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":303,"completed":26,"skipped":320,"failed":0} ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:26:24.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name projected-secret-test-f23e12f3-9273-496c-a97d-e03410ecb5ac STEP: Creating a pod to test consume secrets Aug 9 23:26:24.726: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9949c095-8f00-4c99-8196-55ca6e71559a" in namespace "projected-5853" to be "Succeeded or Failed" Aug 9 23:26:24.804: INFO: Pod "pod-projected-secrets-9949c095-8f00-4c99-8196-55ca6e71559a": Phase="Pending", Reason="", readiness=false. Elapsed: 77.963489ms Aug 9 23:26:26.808: INFO: Pod "pod-projected-secrets-9949c095-8f00-4c99-8196-55ca6e71559a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08195595s Aug 9 23:26:28.813: INFO: Pod "pod-projected-secrets-9949c095-8f00-4c99-8196-55ca6e71559a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.086705697s STEP: Saw pod success Aug 9 23:26:28.813: INFO: Pod "pod-projected-secrets-9949c095-8f00-4c99-8196-55ca6e71559a" satisfied condition "Succeeded or Failed" Aug 9 23:26:28.815: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-9949c095-8f00-4c99-8196-55ca6e71559a container secret-volume-test: STEP: delete the pod Aug 9 23:26:28.863: INFO: Waiting for pod pod-projected-secrets-9949c095-8f00-4c99-8196-55ca6e71559a to disappear Aug 9 23:26:28.874: INFO: Pod pod-projected-secrets-9949c095-8f00-4c99-8196-55ca6e71559a no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:26:28.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5853" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":303,"completed":27,"skipped":320,"failed":0} SSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:26:28.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1330.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@dns-test-service.dns-1330.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1330.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1330.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1330.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1330.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1330.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1330.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1330.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1330.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1330.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 159.147.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.147.159_udp@PTR;check="$$(dig +tcp +noall +answer +search 159.147.105.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.105.147.159_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1330.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1330.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1330.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1330.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1330.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1330.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1330.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1330.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1330.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1330.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1330.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 159.147.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.147.159_udp@PTR;check="$$(dig +tcp +noall +answer +search 159.147.105.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.105.147.159_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 9 23:26:37.130: INFO: Unable to read wheezy_udp@dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:26:37.134: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:26:37.137: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:26:37.140: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:26:37.163: INFO: Unable to read jessie_udp@dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:26:37.165: INFO: Unable to read jessie_tcp@dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:26:37.168: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local from pod 
dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:26:37.171: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:26:37.191: INFO: Lookups using dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566 failed for: [wheezy_udp@dns-test-service.dns-1330.svc.cluster.local wheezy_tcp@dns-test-service.dns-1330.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local jessie_udp@dns-test-service.dns-1330.svc.cluster.local jessie_tcp@dns-test-service.dns-1330.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local] Aug 9 23:26:42.196: INFO: Unable to read wheezy_udp@dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:26:42.202: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:26:42.205: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:26:42.207: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local from pod 
dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:26:42.230: INFO: Unable to read jessie_udp@dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:26:42.232: INFO: Unable to read jessie_tcp@dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:26:42.234: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:26:42.236: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:26:42.249: INFO: Lookups using dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566 failed for: [wheezy_udp@dns-test-service.dns-1330.svc.cluster.local wheezy_tcp@dns-test-service.dns-1330.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local jessie_udp@dns-test-service.dns-1330.svc.cluster.local jessie_tcp@dns-test-service.dns-1330.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local] Aug 9 23:26:47.196: INFO: Unable to read wheezy_udp@dns-test-service.dns-1330.svc.cluster.local from pod 
dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:26:47.200: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:26:47.203: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:26:47.206: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:26:47.226: INFO: Unable to read jessie_udp@dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:26:47.229: INFO: Unable to read jessie_tcp@dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:26:47.232: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:26:47.234: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the 
requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:26:47.251: INFO: Lookups using dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566 failed for: [wheezy_udp@dns-test-service.dns-1330.svc.cluster.local wheezy_tcp@dns-test-service.dns-1330.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local jessie_udp@dns-test-service.dns-1330.svc.cluster.local jessie_tcp@dns-test-service.dns-1330.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local] Aug 9 23:26:52.196: INFO: Unable to read wheezy_udp@dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:26:52.200: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:26:52.203: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:26:52.209: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:26:52.228: INFO: Unable to read jessie_udp@dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods 
dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:26:52.230: INFO: Unable to read jessie_tcp@dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:26:52.232: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:26:52.234: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:26:52.249: INFO: Lookups using dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566 failed for: [wheezy_udp@dns-test-service.dns-1330.svc.cluster.local wheezy_tcp@dns-test-service.dns-1330.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local jessie_udp@dns-test-service.dns-1330.svc.cluster.local jessie_tcp@dns-test-service.dns-1330.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local] Aug 9 23:26:57.309: INFO: Unable to read wheezy_udp@dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:26:57.314: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) 
Aug 9 23:26:57.316: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:26:57.318: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:26:57.642: INFO: Unable to read jessie_udp@dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:26:57.644: INFO: Unable to read jessie_tcp@dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:26:57.646: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:26:57.648: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:26:57.665: INFO: Lookups using dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566 failed for: [wheezy_udp@dns-test-service.dns-1330.svc.cluster.local wheezy_tcp@dns-test-service.dns-1330.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local 
jessie_udp@dns-test-service.dns-1330.svc.cluster.local jessie_tcp@dns-test-service.dns-1330.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local] Aug 9 23:27:02.196: INFO: Unable to read wheezy_udp@dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:27:02.199: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:27:02.203: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:27:02.206: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:27:02.227: INFO: Unable to read jessie_udp@dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:27:02.229: INFO: Unable to read jessie_tcp@dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:27:02.232: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local from pod 
dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:27:02.235: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local from pod dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566: the server could not find the requested resource (get pods dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566) Aug 9 23:27:02.252: INFO: Lookups using dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566 failed for: [wheezy_udp@dns-test-service.dns-1330.svc.cluster.local wheezy_tcp@dns-test-service.dns-1330.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local jessie_udp@dns-test-service.dns-1330.svc.cluster.local jessie_tcp@dns-test-service.dns-1330.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1330.svc.cluster.local] Aug 9 23:27:07.295: INFO: DNS probes using dns-1330/dns-test-7f299aeb-a3e7-433e-9d56-7e6090b29566 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:27:07.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1330" for this suite. 
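The wheezy/jessie probe scripts above derive two name forms before querying: a pod A record built from the pod IP (`hostname -i | awk -F. ...`) and a PTR name built by reversing the service IP's octets. That name construction can be sketched as (the pod IP in the usage line is a made-up example; the namespace and service IP mirror the log):

```python
# Pod A record: dots in the pod IP become dashes, then the namespace and
# pod.cluster.local suffix are appended, as the awk pipeline above does.
def pod_a_record(pod_ip: str, namespace: str) -> str:
    return pod_ip.replace(".", "-") + f".{namespace}.pod.cluster.local"

# IPv4 PTR name: reverse the octets and append in-addr.arpa. (trailing dot
# as passed to dig in the script above).
def ptr_name(service_ip: str) -> str:
    return ".".join(reversed(service_ip.split("."))) + ".in-addr.arpa."

print(ptr_name("10.105.147.159"))        # 159.147.105.10.in-addr.arpa.
print(pod_a_record("10.244.1.7", "dns-1330"))  # hypothetical pod IP
```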
• [SLOW TEST:39.282 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":303,"completed":28,"skipped":328,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:27:08.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:27:25.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7963" for this suite. 
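The ResourceQuota test above checks that `status.used` rises when the Secret is created and falls back when it is deleted. A toy in-memory model of that accounting, purely illustrative (the real bookkeeping is done by the quota controller and admission, not client code):

```python
# Toy model of ResourceQuota accounting for the "secrets" resource:
# creation consumes quota, deletion releases it, and creation beyond the
# hard limit is rejected.
class QuotaModel:
    def __init__(self, hard: int):
        self.hard = hard   # spec.hard["secrets"]
        self.used = 0      # status.used["secrets"]

    def create_secret(self) -> None:
        if self.used + 1 > self.hard:
            raise RuntimeError("exceeded quota: secrets")
        self.used += 1

    def delete_secret(self) -> None:
        self.used -= 1

q = QuotaModel(hard=1)
q.create_secret()   # quota status captures the creation
q.delete_secret()   # deletion releases the usage back to zero
```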
• [SLOW TEST:17.167 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":303,"completed":29,"skipped":433,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:27:25.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 9 23:27:25.460: INFO: Waiting up to 5m0s for pod "downwardapi-volume-159ccdac-b4ff-450e-a516-68c327361170" in namespace "projected-429" to be "Succeeded or Failed" Aug 9 23:27:25.480: INFO: Pod "downwardapi-volume-159ccdac-b4ff-450e-a516-68c327361170": Phase="Pending", Reason="", readiness=false. 
Elapsed: 20.827173ms Aug 9 23:27:27.500: INFO: Pod "downwardapi-volume-159ccdac-b4ff-450e-a516-68c327361170": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040164854s Aug 9 23:27:29.524: INFO: Pod "downwardapi-volume-159ccdac-b4ff-450e-a516-68c327361170": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06420631s STEP: Saw pod success Aug 9 23:27:29.524: INFO: Pod "downwardapi-volume-159ccdac-b4ff-450e-a516-68c327361170" satisfied condition "Succeeded or Failed" Aug 9 23:27:29.527: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-159ccdac-b4ff-450e-a516-68c327361170 container client-container: STEP: delete the pod Aug 9 23:27:29.603: INFO: Waiting for pod downwardapi-volume-159ccdac-b4ff-450e-a516-68c327361170 to disappear Aug 9 23:27:29.607: INFO: Pod downwardapi-volume-159ccdac-b4ff-450e-a516-68c327361170 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:27:29.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-429" for this suite. 
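The "Waiting up to 5m0s ... Elapsed: ..." lines above come from a poll loop: check the pod phase, sleep a fixed interval, repeat until the condition holds or the timeout expires (the roughly 2-second spacing of the Elapsed values suggests the interval). A minimal sketch of such a loop, with a toy condition standing in for the pod-phase check:

```python
import time

# Sketch of a poll-until loop; timeout/interval defaults mirror the log's
# 5m wait and ~2s spacing, but the function itself is illustrative.
def wait_for(condition, timeout=300.0, interval=2.0):
    start = time.monotonic()
    while True:
        if condition():
            return time.monotonic() - start  # elapsed time, as logged
        if time.monotonic() - start > timeout:
            raise TimeoutError(f"condition not met within {timeout}s")
        time.sleep(interval)

# Toy condition that becomes true on the second poll.
state = {"polls": 0}
def succeeded():
    state["polls"] += 1
    return state["polls"] >= 2

elapsed = wait_for(succeeded, timeout=10.0, interval=0.01)
```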
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":303,"completed":30,"skipped":439,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:27:29.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-12247088-3c04-4e7d-a0a0-92941a12079c in namespace container-probe-6801 Aug 9 23:27:33.699: INFO: Started pod liveness-12247088-3c04-4e7d-a0a0-92941a12079c in namespace container-probe-6801 STEP: checking the pod's current state and verifying that restartCount is present Aug 9 23:27:33.702: INFO: Initial restart count of pod liveness-12247088-3c04-4e7d-a0a0-92941a12079c is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:31:34.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6801" for this suite. 
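The probe test above verifies that a healthy tcp:8080 liveness probe does *not* increment `restartCount` over a ~4-minute observation window. The kubelet's restart decision hinges on consecutive probe failures crossing `failureThreshold`; a simplified sketch of that counting logic (not kubelet's actual implementation):

```python
def count_probe_restarts(results, failure_threshold=3):
    """Count restarts a liveness checker would trigger.

    results is a sequence of booleans (probe success/failure). After
    failure_threshold consecutive failures the container is restarted
    and the consecutive-failure counter resets. Any success also
    resets the counter, which is why the passing probe above keeps
    restartCount at 0.
    """
    restarts = 0
    consecutive = 0
    for ok in results:
        if ok:
            consecutive = 0
        else:
            consecutive += 1
            if consecutive >= failure_threshold:
                restarts += 1
                consecutive = 0
    return restarts
```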
• [SLOW TEST:245.115 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":303,"completed":31,"skipped":447,"failed":0} SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:31:34.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-9lv7 STEP: Creating a pod to test atomic-volume-subpath Aug 9 23:31:35.033: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-9lv7" in namespace "subpath-4065" to be "Succeeded or Failed" Aug 9 23:31:35.174: INFO: Pod "pod-subpath-test-configmap-9lv7": Phase="Pending", Reason="", readiness=false. Elapsed: 141.3212ms Aug 9 23:31:37.178: INFO: Pod "pod-subpath-test-configmap-9lv7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.145142588s Aug 9 23:31:39.183: INFO: Pod "pod-subpath-test-configmap-9lv7": Phase="Running", Reason="", readiness=true. Elapsed: 4.149616957s Aug 9 23:31:41.187: INFO: Pod "pod-subpath-test-configmap-9lv7": Phase="Running", Reason="", readiness=true. Elapsed: 6.153597705s Aug 9 23:31:43.191: INFO: Pod "pod-subpath-test-configmap-9lv7": Phase="Running", Reason="", readiness=true. Elapsed: 8.157736742s Aug 9 23:31:45.195: INFO: Pod "pod-subpath-test-configmap-9lv7": Phase="Running", Reason="", readiness=true. Elapsed: 10.161950178s Aug 9 23:31:47.199: INFO: Pod "pod-subpath-test-configmap-9lv7": Phase="Running", Reason="", readiness=true. Elapsed: 12.166378512s Aug 9 23:31:49.203: INFO: Pod "pod-subpath-test-configmap-9lv7": Phase="Running", Reason="", readiness=true. Elapsed: 14.170427322s Aug 9 23:31:51.207: INFO: Pod "pod-subpath-test-configmap-9lv7": Phase="Running", Reason="", readiness=true. Elapsed: 16.174162971s Aug 9 23:31:53.212: INFO: Pod "pod-subpath-test-configmap-9lv7": Phase="Running", Reason="", readiness=true. Elapsed: 18.178643428s Aug 9 23:31:55.216: INFO: Pod "pod-subpath-test-configmap-9lv7": Phase="Running", Reason="", readiness=true. Elapsed: 20.183341006s Aug 9 23:31:57.220: INFO: Pod "pod-subpath-test-configmap-9lv7": Phase="Running", Reason="", readiness=true. Elapsed: 22.18663119s Aug 9 23:31:59.224: INFO: Pod "pod-subpath-test-configmap-9lv7": Phase="Running", Reason="", readiness=true. Elapsed: 24.190544307s Aug 9 23:32:01.487: INFO: Pod "pod-subpath-test-configmap-9lv7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.453768209s STEP: Saw pod success Aug 9 23:32:01.487: INFO: Pod "pod-subpath-test-configmap-9lv7" satisfied condition "Succeeded or Failed" Aug 9 23:32:01.490: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-9lv7 container test-container-subpath-configmap-9lv7: STEP: delete the pod Aug 9 23:32:01.621: INFO: Waiting for pod pod-subpath-test-configmap-9lv7 to disappear Aug 9 23:32:01.893: INFO: Pod pod-subpath-test-configmap-9lv7 no longer exists STEP: Deleting pod pod-subpath-test-configmap-9lv7 Aug 9 23:32:01.893: INFO: Deleting pod "pod-subpath-test-configmap-9lv7" in namespace "subpath-4065" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:32:01.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4065" for this suite. • [SLOW TEST:27.174 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":303,"completed":32,"skipped":450,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 
23:32:01.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Aug 9 23:32:02.786: INFO: Waiting up to 5m0s for pod "pod-bf8d016e-d1e1-46bf-a570-870bd143c07c" in namespace "emptydir-3476" to be "Succeeded or Failed" Aug 9 23:32:02.961: INFO: Pod "pod-bf8d016e-d1e1-46bf-a570-870bd143c07c": Phase="Pending", Reason="", readiness=false. Elapsed: 174.201313ms Aug 9 23:32:04.965: INFO: Pod "pod-bf8d016e-d1e1-46bf-a570-870bd143c07c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.17831041s Aug 9 23:32:07.067: INFO: Pod "pod-bf8d016e-d1e1-46bf-a570-870bd143c07c": Phase="Running", Reason="", readiness=true. Elapsed: 4.280839001s Aug 9 23:32:09.072: INFO: Pod "pod-bf8d016e-d1e1-46bf-a570-870bd143c07c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.285151784s STEP: Saw pod success Aug 9 23:32:09.072: INFO: Pod "pod-bf8d016e-d1e1-46bf-a570-870bd143c07c" satisfied condition "Succeeded or Failed" Aug 9 23:32:09.075: INFO: Trying to get logs from node latest-worker2 pod pod-bf8d016e-d1e1-46bf-a570-870bd143c07c container test-container: STEP: delete the pod Aug 9 23:32:09.095: INFO: Waiting for pod pod-bf8d016e-d1e1-46bf-a570-870bd143c07c to disappear Aug 9 23:32:09.100: INFO: Pod pod-bf8d016e-d1e1-46bf-a570-870bd143c07c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:32:09.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3476" for this suite. 
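The emptyDir test above mounts a tmpfs-backed volume and has a mount-tester container create a file as a non-root user with mode 0644, then verify the permission bits. A local sketch of the same create-and-verify step (hypothetical helper, not the mount-tester's actual code):

```python
import os
import stat
import tempfile

def create_with_mode(path, content=b"mount-tester content", mode=0o644):
    """Create a file, force its permission bits, and return them.

    os.open's mode argument is subject to the process umask, so an
    explicit chmod follows to pin the bits, matching what the e2e
    test asserts on the volume file.
    """
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, mode)
    try:
        os.write(fd, content)
    finally:
        os.close(fd)
    os.chmod(path, mode)
    return stat.S_IMODE(os.stat(path).st_mode)
```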
• [SLOW TEST:7.202 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":33,"skipped":455,"failed":0} SSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:32:09.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Aug 9 23:32:09.153: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 9 23:32:09.170: INFO: Waiting for terminating namespaces to be deleted... 
Aug 9 23:32:09.193: INFO: Logging pods the apiserver thinks is on node latest-worker before test Aug 9 23:32:09.199: INFO: rally-21b6a035-bqwtbx5n-6497df95bc-mv88p from c-rally-21b6a035-vjjcei4a started at 2020-08-09 23:31:58 +0000 UTC (1 container statuses recorded) Aug 9 23:32:09.199: INFO: Container rally-21b6a035-bqwtbx5n ready: false, restart count 0 Aug 9 23:32:09.199: INFO: coredns-f9fd979d6-s745j from kube-system started at 2020-07-19 21:39:25 +0000 UTC (1 container statuses recorded) Aug 9 23:32:09.199: INFO: Container coredns ready: true, restart count 0 Aug 9 23:32:09.199: INFO: coredns-f9fd979d6-zs4sj from kube-system started at 2020-07-19 21:39:36 +0000 UTC (1 container statuses recorded) Aug 9 23:32:09.199: INFO: Container coredns ready: true, restart count 0 Aug 9 23:32:09.199: INFO: kindnet-46dnt from kube-system started at 2020-07-19 21:38:46 +0000 UTC (1 container statuses recorded) Aug 9 23:32:09.199: INFO: Container kindnet-cni ready: true, restart count 0 Aug 9 23:32:09.199: INFO: kube-proxy-sxpg9 from kube-system started at 2020-07-19 21:38:45 +0000 UTC (1 container statuses recorded) Aug 9 23:32:09.199: INFO: Container kube-proxy ready: true, restart count 0 Aug 9 23:32:09.199: INFO: local-path-provisioner-8b46957d4-2gzpd from local-path-storage started at 2020-07-19 21:39:25 +0000 UTC (1 container statuses recorded) Aug 9 23:32:09.199: INFO: Container local-path-provisioner ready: true, restart count 0 Aug 9 23:32:09.199: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Aug 9 23:32:09.203: INFO: rally-21b6a035-bqwtbx5n-6497df95bc-z8vsg from c-rally-21b6a035-vjjcei4a started at 2020-08-09 23:31:58 +0000 UTC (1 container statuses recorded) Aug 9 23:32:09.203: INFO: Container rally-21b6a035-bqwtbx5n ready: false, restart count 0 Aug 9 23:32:09.203: INFO: kindnet-g6zbt from kube-system started at 2020-07-19 21:38:46 +0000 UTC (1 container statuses recorded) Aug 9 23:32:09.203: INFO: Container kindnet-cni ready: true, 
restart count 0 Aug 9 23:32:09.203: INFO: kube-proxy-nsnzn from kube-system started at 2020-07-19 21:38:45 +0000 UTC (1 container statuses recorded) Aug 9 23:32:09.203: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-c3ad20a3-d8c8-4384-a571-f0dd0bf22daf 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-c3ad20a3-d8c8-4384-a571-f0dd0bf22daf off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-c3ad20a3-d8c8-4384-a571-f0dd0bf22daf [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:32:17.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2946" for this suite. 
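The NodeSelector test above applies a random label (here value `42`) to a node and relaunches the pod with a matching `spec.nodeSelector`. The scheduler's matching rule for a plain nodeSelector is simple subset semantics, sketched below (a simplification; the real scheduler also handles affinity, taints, and more):

```python
def node_selector_matches(node_labels, selector):
    """True when every key/value pair in selector exists on the node.

    An empty selector matches every node, which is why the initial
    unlabeled pod in the test above can land anywhere.
    """
    return all(node_labels.get(k) == v for k, v in selector.items())
```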
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.239 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":303,"completed":34,"skipped":464,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:32:17.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 
'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:32:52.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6706" for this suite. • [SLOW TEST:35.304 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":303,"completed":35,"skipped":496,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec 
when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:32:52.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Aug 9 23:32:52.710: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:33:09.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5928" for this suite.
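The CRD publish-OpenAPI test above flips one version of a multi-version CRD to `served: false` and checks that its definition disappears from the published spec while the other version is untouched. The apiserver only publishes versions whose `served` flag is true; a minimal sketch of that filter over CRD version entries (plain dicts here, standing in for the `spec.versions` items):

```python
def served_versions(crd_versions):
    """Names of CRD versions the apiserver would publish.

    crd_versions is a list of dicts shaped like spec.versions entries,
    e.g. {"name": "v1", "served": True, "storage": True}. Versions with
    served set to False are dropped from discovery and OpenAPI.
    """
    return [v["name"] for v in crd_versions if v.get("served", False)]
```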
• [SLOW TEST:17.107 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":303,"completed":36,"skipped":578,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:33:09.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 9 23:33:10.428: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 9 23:33:12.439: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732612790, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732612790, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732612790, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732612790, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 9 23:33:14.442: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732612790, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732612790, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732612790, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732612790, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 9 23:33:17.478: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally 
reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:33:17.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7992" for this suite. STEP: Destroying namespace "webhook-7992-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.894 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":303,"completed":37,"skipped":612,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:33:17.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 9 23:33:17.802: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:33:24.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8665" for this suite. • [SLOW TEST:6.381 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":303,"completed":38,"skipped":638,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] 
Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:33:24.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-31f0dcbc-9590-4b63-8ab5-3e5f8df7f994 STEP: Creating a pod to test consume configMaps Aug 9 23:33:24.162: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-61315053-ce02-4bf6-ba58-d75b5ae7cec2" in namespace "projected-3345" to be "Succeeded or Failed" Aug 9 23:33:24.186: INFO: Pod "pod-projected-configmaps-61315053-ce02-4bf6-ba58-d75b5ae7cec2": Phase="Pending", Reason="", readiness=false. Elapsed: 23.871845ms Aug 9 23:33:26.190: INFO: Pod "pod-projected-configmaps-61315053-ce02-4bf6-ba58-d75b5ae7cec2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028510731s Aug 9 23:33:28.194: INFO: Pod "pod-projected-configmaps-61315053-ce02-4bf6-ba58-d75b5ae7cec2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.032061982s STEP: Saw pod success Aug 9 23:33:28.194: INFO: Pod "pod-projected-configmaps-61315053-ce02-4bf6-ba58-d75b5ae7cec2" satisfied condition "Succeeded or Failed" Aug 9 23:33:28.197: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-61315053-ce02-4bf6-ba58-d75b5ae7cec2 container projected-configmap-volume-test: STEP: delete the pod Aug 9 23:33:28.473: INFO: Waiting for pod pod-projected-configmaps-61315053-ce02-4bf6-ba58-d75b5ae7cec2 to disappear Aug 9 23:33:28.505: INFO: Pod pod-projected-configmaps-61315053-ce02-4bf6-ba58-d75b5ae7cec2 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:33:28.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3345" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":303,"completed":39,"skipped":643,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:33:28.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes 
STEP: retrieving the pod Aug 9 23:33:32.651: INFO: &Pod{ObjectMeta:{send-events-d54ae0ce-e4b1-4259-813e-9a7953029823 events-4731 /api/v1/namespaces/events-4731/pods/send-events-d54ae0ce-e4b1-4259-813e-9a7953029823 1029db1a-6bd3-408d-b616-52b3b6a909a4 5768056 0 2020-08-09 23:33:28 +0000 UTC map[name:foo time:581827435] map[] [] [] [{e2e.test Update v1 2020-08-09 23:33:28 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-09 23:33:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.70\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-28kmn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-28kmn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDi
sk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-28kmn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPrio
rity,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:33:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:33:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:33:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:33:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.70,StartTime:2020-08-09 23:33:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-09 23:33:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://fb94d0b15b24061562ffdc3ba81cb3351e7a34298a4ce8bff636ca8ed843243a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.70,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Aug 9 23:33:34.657: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Aug 9 23:33:36.662: INFO: Saw kubelet event for our pod. 
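The scheduler and kubelet event checks above work by listing events scoped to the pod under test. A minimal sketch of how such a field selector can be constructed — the exact set of fields the conformance test matches on is an assumption here, based on the core/v1 Event schema:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// eventSelector builds a field-selector string for listing events that
// refer to a specific pod from a specific component (e.g. "default-scheduler"
// or a kubelet). Field names follow the core/v1 Event schema; this is a
// simplified stand-in for the selector the e2e framework builds.
func eventSelector(podName, namespace, source string) string {
	fields := map[string]string{
		"involvedObject.kind":      "Pod",
		"involvedObject.name":      podName,
		"involvedObject.namespace": namespace,
		"source":                   source,
	}
	pairs := make([]string, 0, len(fields))
	for k, v := range fields {
		pairs = append(pairs, k+"="+v)
	}
	sort.Strings(pairs) // deterministic order for logging and comparison
	return strings.Join(pairs, ",")
}

func main() {
	// Pod and namespace names taken from the log excerpt above.
	fmt.Println(eventSelector("send-events-d54ae0ce-e4b1-4259-813e-9a7953029823", "events-4731", "default-scheduler"))
}
```

The test then polls with a selector like this until it sees at least one matching event from each component ("Saw scheduler event", "Saw kubelet event").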
STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:33:36.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-4731" for this suite. • [SLOW TEST:8.167 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":303,"completed":40,"skipped":652,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:33:36.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Aug 9 23:33:36.866: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 9 23:33:36.877: INFO: Number of nodes with available pods: 0 Aug 9 23:33:36.877: INFO: Node latest-worker is running more than one daemon pod Aug 9 23:33:37.962: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 9 23:33:37.965: INFO: Number of nodes with available pods: 0 Aug 9 23:33:37.965: INFO: Node latest-worker is running more than one daemon pod Aug 9 23:33:38.882: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 9 23:33:38.886: INFO: Number of nodes with available pods: 0 Aug 9 23:33:38.886: INFO: Node latest-worker is running more than one daemon pod Aug 9 23:33:40.196: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 9 23:33:40.199: INFO: Number of nodes with available pods: 0 Aug 9 23:33:40.199: INFO: Node latest-worker is running more than one daemon pod Aug 9 23:33:40.882: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 9 23:33:40.886: INFO: Number of nodes with available pods: 0 Aug 9 23:33:40.886: INFO: Node latest-worker is running more than one daemon pod Aug 9 23:33:41.882: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 9 23:33:41.904: INFO: Number of nodes with available pods: 1 Aug 9 23:33:41.904: INFO: Node 
latest-worker2 is running more than one daemon pod Aug 9 23:33:42.885: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 9 23:33:42.888: INFO: Number of nodes with available pods: 2 Aug 9 23:33:42.888: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Aug 9 23:33:42.931: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 9 23:33:42.938: INFO: Number of nodes with available pods: 2 Aug 9 23:33:42.938: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9119, will wait for the garbage collector to delete the pods Aug 9 23:33:44.192: INFO: Deleting DaemonSet.extensions daemon-set took: 5.477519ms Aug 9 23:33:44.692: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.249454ms Aug 9 23:33:53.919: INFO: Number of nodes with available pods: 0 Aug 9 23:33:53.919: INFO: Number of running nodes: 0, number of available pods: 0 Aug 9 23:33:53.925: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9119/daemonsets","resourceVersion":"5768253"},"items":null} Aug 9 23:33:53.928: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9119/pods","resourceVersion":"5768253"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:33:54.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9119" for this suite. • [SLOW TEST:17.642 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":303,"completed":41,"skipped":693,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:33:54.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Aug 9 23:33:54.455: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:34:02.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4974" for this suite. • [SLOW TEST:7.819 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":303,"completed":42,"skipped":720,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:34:02.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:34:18.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2672" for this suite. • [SLOW TEST:16.264 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":303,"completed":43,"skipped":730,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:34:18.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Aug 9 23:34:18.500: INFO: Waiting up to 5m0s for pod "pod-c4f6f134-b286-42b6-a0a5-fcb6bc3e2753" in namespace "emptydir-6948" to be "Succeeded or Failed" Aug 9 23:34:18.504: INFO: Pod "pod-c4f6f134-b286-42b6-a0a5-fcb6bc3e2753": Phase="Pending", Reason="", readiness=false. Elapsed: 3.761593ms Aug 9 23:34:20.681: INFO: Pod "pod-c4f6f134-b286-42b6-a0a5-fcb6bc3e2753": Phase="Pending", Reason="", readiness=false. Elapsed: 2.181555553s Aug 9 23:34:22.685: INFO: Pod "pod-c4f6f134-b286-42b6-a0a5-fcb6bc3e2753": Phase="Running", Reason="", readiness=true. Elapsed: 4.185382755s Aug 9 23:34:24.690: INFO: Pod "pod-c4f6f134-b286-42b6-a0a5-fcb6bc3e2753": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.189975325s STEP: Saw pod success Aug 9 23:34:24.690: INFO: Pod "pod-c4f6f134-b286-42b6-a0a5-fcb6bc3e2753" satisfied condition "Succeeded or Failed" Aug 9 23:34:24.693: INFO: Trying to get logs from node latest-worker2 pod pod-c4f6f134-b286-42b6-a0a5-fcb6bc3e2753 container test-container: STEP: delete the pod Aug 9 23:34:24.725: INFO: Waiting for pod pod-c4f6f134-b286-42b6-a0a5-fcb6bc3e2753 to disappear Aug 9 23:34:24.742: INFO: Pod pod-c4f6f134-b286-42b6-a0a5-fcb6bc3e2753 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:34:24.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6948" for this suite. • [SLOW TEST:6.339 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":44,"skipped":738,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:34:24.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] 
Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 9 23:34:24.992: INFO: The status of Pod test-webserver-7572ecde-78fd-4b63-9900-92ce01b83769 is Pending, waiting for it to be Running (with Ready = true) Aug 9 23:34:27.011: INFO: The status of Pod test-webserver-7572ecde-78fd-4b63-9900-92ce01b83769 is Pending, waiting for it to be Running (with Ready = true) Aug 9 23:34:28.995: INFO: The status of Pod test-webserver-7572ecde-78fd-4b63-9900-92ce01b83769 is Running (Ready = false) Aug 9 23:34:31.002: INFO: The status of Pod test-webserver-7572ecde-78fd-4b63-9900-92ce01b83769 is Running (Ready = false) Aug 9 23:34:32.997: INFO: The status of Pod test-webserver-7572ecde-78fd-4b63-9900-92ce01b83769 is Running (Ready = false) Aug 9 23:34:34.997: INFO: The status of Pod test-webserver-7572ecde-78fd-4b63-9900-92ce01b83769 is Running (Ready = false) Aug 9 23:34:36.997: INFO: The status of Pod test-webserver-7572ecde-78fd-4b63-9900-92ce01b83769 is Running (Ready = false) Aug 9 23:34:38.997: INFO: The status of Pod test-webserver-7572ecde-78fd-4b63-9900-92ce01b83769 is Running (Ready = false) Aug 9 23:34:40.996: INFO: The status of Pod test-webserver-7572ecde-78fd-4b63-9900-92ce01b83769 is Running (Ready = false) Aug 9 23:34:42.996: INFO: The status of Pod test-webserver-7572ecde-78fd-4b63-9900-92ce01b83769 is Running (Ready = true) Aug 9 23:34:42.998: INFO: Container started at 2020-08-09 23:34:27 +0000 UTC, pod became ready at 2020-08-09 23:34:42 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:34:42.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "container-probe-2419" for this suite. • [SLOW TEST:18.257 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":303,"completed":45,"skipped":747,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:34:43.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0809 23:34:44.172912 8 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 9 23:35:46.460: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. 
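The garbage-collector test above deletes a Deployment with `deleteOptions.PropagationPolicy: Orphan` and then waits to confirm the ReplicaSet survives. A minimal model of that semantics — orphan deletion strips the deleted owner from each dependent's ownerReferences instead of deleting the dependent, while background/foreground deletion lets the GC remove dependents whose only owner is gone. The types here are bare-bones stand-ins, not the real API objects:

```go
package main

import "fmt"

// ownerRef names an owning object; object is a simplified stand-in for an
// API object carrying ownerReferences.
type ownerRef struct{ name string }

type object struct {
	name   string
	owners []ownerRef
}

// deleteWithPolicy models deleting `owner` under the two propagation modes
// seen in the test: with orphan=true, dependents lose the owner reference
// but are kept; otherwise dependents left with no owners are collected.
func deleteWithPolicy(owner string, deps []object, orphan bool) []object {
	var kept []object
	for _, d := range deps {
		var owners []ownerRef
		for _, o := range d.owners {
			if o.name != owner {
				owners = append(owners, o)
			}
		}
		if orphan || len(owners) > 0 {
			d.owners = owners
			kept = append(kept, d) // orphaned dependents survive
		}
		// otherwise the dependent is garbage-collected
	}
	return kept
}

func main() {
	rs := []object{{name: "deployment-rs", owners: []ownerRef{{name: "deployment"}}}}
	fmt.Println(len(deleteWithPolicy("deployment", rs, true)))  // Orphan: RS kept
	fmt.Println(len(deleteWithPolicy("deployment", rs, false))) // Background: RS collected
}
```

This is why the test's wait step is phrased as checking whether the GC "mistakenly deletes the rs": under Orphan propagation the ReplicaSet must remain.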
[AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:35:46.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9321" for this suite. • [SLOW TEST:63.869 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":303,"completed":46,"skipped":748,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:35:46.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod with failed condition STEP: updating the pod Aug 9 23:37:47.997: INFO: Successfully updated pod "var-expansion-5f6e7697-bdab-407f-9909-ff3d7fdaf21f" STEP: waiting for pod running 
STEP: deleting the pod gracefully Aug 9 23:37:50.030: INFO: Deleting pod "var-expansion-5f6e7697-bdab-407f-9909-ff3d7fdaf21f" in namespace "var-expansion-9261" Aug 9 23:37:50.035: INFO: Wait up to 5m0s for pod "var-expansion-5f6e7697-bdab-407f-9909-ff3d7fdaf21f" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:38:34.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9261" for this suite. • [SLOW TEST:167.195 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":303,"completed":47,"skipped":761,"failed":0} SS ------------------------------ [sig-network] IngressClass API should support creating IngressClass API operations [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] IngressClass API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:38:34.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:148 [It] should support creating IngressClass 
API operations [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Aug 9 23:38:34.249: INFO: starting watch STEP: patching STEP: updating Aug 9 23:38:34.265: INFO: waiting for watch events with expected annotations Aug 9 23:38:34.265: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:38:34.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingressclass-773" for this suite. •{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":303,"completed":48,"skipped":763,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:38:34.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:38:45.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9674" for this suite. • [SLOW TEST:11.268 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":303,"completed":49,"skipped":772,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:38:45.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:39:01.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6825" for this suite. • [SLOW TEST:16.309 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":303,"completed":50,"skipped":779,"failed":0} SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:39:01.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Aug 9 23:39:01.956: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:39:09.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4417" for this suite. 
• [SLOW TEST:7.997 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":303,"completed":51,"skipped":782,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 9 23:39:09.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-projected-all-test-volume-9246b408-0d4b-4479-9165-4f845c476585
STEP: Creating secret with name secret-projected-all-test-volume-032ea356-61c6-4d2f-95da-c76f16471256
STEP: Creating a pod to test Check all projections for projected volume plugin
Aug 9 23:39:10.041: INFO: Waiting up to 5m0s for pod "projected-volume-6037d006-d520-4c96-a284-5af22533df6d" in namespace "projected-8488" to be "Succeeded or Failed"
Aug 9 23:39:10.045: INFO: Pod "projected-volume-6037d006-d520-4c96-a284-5af22533df6d": Phase="Pending", Reason="", readiness=false.
Elapsed: 3.983512ms
Aug 9 23:39:12.050: INFO: Pod "projected-volume-6037d006-d520-4c96-a284-5af22533df6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008837332s
Aug 9 23:39:14.054: INFO: Pod "projected-volume-6037d006-d520-4c96-a284-5af22533df6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012949191s
STEP: Saw pod success
Aug 9 23:39:14.054: INFO: Pod "projected-volume-6037d006-d520-4c96-a284-5af22533df6d" satisfied condition "Succeeded or Failed"
Aug 9 23:39:14.057: INFO: Trying to get logs from node latest-worker2 pod projected-volume-6037d006-d520-4c96-a284-5af22533df6d container projected-all-volume-test: 
STEP: delete the pod
Aug 9 23:39:14.350: INFO: Waiting for pod projected-volume-6037d006-d520-4c96-a284-5af22533df6d to disappear
Aug 9 23:39:14.362: INFO: Pod projected-volume-6037d006-d520-4c96-a284-5af22533df6d no longer exists
[AfterEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 9 23:39:14.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8488" for this suite.
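The repeated `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` / `Phase="Pending" ... Elapsed: ...` lines above come from a poll loop in the e2e framework: check the pod phase, log the elapsed time, and retry on an interval until a terminal phase or the timeout. A minimal sketch of that pattern (a hypothetical helper, not the framework's actual code):

```python
import time

def wait_for_pod_phase(get_phase, target_phases=("Succeeded", "Failed"),
                       timeout=300.0, interval=2.0):
    """Poll get_phase() until it returns one of target_phases or timeout expires.
    Sketches the 'Waiting up to 5m0s for pod ...' loop visible in the log above."""
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        if phase in target_phases:
            return phase, elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase!r} after {timeout}s")
        time.sleep(interval)

# Simulated phase sequence matching the log: Pending, Pending, then Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
phase, _ = wait_for_pod_phase(lambda: next(phases), interval=0.0)
```

Note that both `Succeeded` and `Failed` are accepted as terminal here, matching the log's condition string; the test then separately asserts that the phase it saw was `Succeeded`.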
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":303,"completed":52,"skipped":799,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 9 23:39:14.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 9 23:39:14.489: INFO: Waiting up to 5m0s for pod "pod-af37dc80-ce92-4489-b97c-402d63375273" in namespace "emptydir-3079" to be "Succeeded or Failed"
Aug 9 23:39:14.494: INFO: Pod "pod-af37dc80-ce92-4489-b97c-402d63375273": Phase="Pending", Reason="", readiness=false. Elapsed: 4.476516ms
Aug 9 23:39:16.498: INFO: Pod "pod-af37dc80-ce92-4489-b97c-402d63375273": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008398245s
Aug 9 23:39:18.505: INFO: Pod "pod-af37dc80-ce92-4489-b97c-402d63375273": Phase="Running", Reason="", readiness=true. Elapsed: 4.015474583s
Aug 9 23:39:20.509: INFO: Pod "pod-af37dc80-ce92-4489-b97c-402d63375273": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 6.019741147s
STEP: Saw pod success
Aug 9 23:39:20.509: INFO: Pod "pod-af37dc80-ce92-4489-b97c-402d63375273" satisfied condition "Succeeded or Failed"
Aug 9 23:39:20.512: INFO: Trying to get logs from node latest-worker2 pod pod-af37dc80-ce92-4489-b97c-402d63375273 container test-container: 
STEP: delete the pod
Aug 9 23:39:20.548: INFO: Waiting for pod pod-af37dc80-ce92-4489-b97c-402d63375273 to disappear
Aug 9 23:39:20.560: INFO: Pod pod-af37dc80-ce92-4489-b97c-402d63375273 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 9 23:39:20.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3079" for this suite.
• [SLOW TEST:6.224 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":53,"skipped":834,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 9 23:39:20.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77
[It] deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Aug 9 23:39:20.642: INFO: Creating deployment "webserver-deployment"
Aug 9 23:39:20.646: INFO: Waiting for observed generation 1
Aug 9 23:39:22.666: INFO: Waiting for all required pods to come up
Aug 9 23:39:22.672: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Aug 9 23:39:32.683: INFO: Waiting for deployment "webserver-deployment" to complete
Aug 9 23:39:32.688: INFO: Updating deployment "webserver-deployment" with a non-existent image
Aug 9 23:39:32.695: INFO: Updating deployment webserver-deployment
Aug 9 23:39:32.695: INFO: Waiting for observed generation 2
Aug 9 23:39:34.724: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Aug 9 23:39:34.728: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Aug 9 23:39:34.731: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Aug 9 23:39:34.738: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Aug 9 23:39:34.738: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Aug 9 23:39:34.740: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Aug 9 23:39:34.743: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Aug 9 23:39:34.743: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Aug 9 23:39:34.750: INFO: Updating deployment webserver-deployment
Aug 9 23:39:34.750: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Aug 9 23:39:34.832: INFO: Verifying that
first rollout's replicaset has .spec.replicas = 20 Aug 9 23:39:35.010: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Aug 9 23:39:37.646: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-6933 /apis/apps/v1/namespaces/deployment-6933/deployments/webserver-deployment feb13398-4aaf-4e0d-a280-d3ba5ac34345 5770467 3 2020-08-09 23:39:20 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-08-09 23:39:34 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-08-09 23:39:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000478f98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-09 23:39:34 +0000 
UTC,LastTransitionTime:2020-08-09 23:39:34 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2020-08-09 23:39:35 +0000 UTC,LastTransitionTime:2020-08-09 23:39:20 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Aug 9 23:39:37.952: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-6933 /apis/apps/v1/namespaces/deployment-6933/replicasets/webserver-deployment-795d758f88 873e33f4-89f5-4ddb-8462-6b2202a419e9 5770455 3 2020-08-09 23:39:32 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment feb13398-4aaf-4e0d-a280-d3ba5ac34345 0xc000479427 0xc000479428}] [] [{kube-controller-manager Update apps/v1 2020-08-09 23:39:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"feb13398-4aaf-4e0d-a280-d3ba5ac34345\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0004794b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 9 23:39:37.952: INFO: All old ReplicaSets of Deployment "webserver-deployment": Aug 9 23:39:37.952: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-dd94f59b7 deployment-6933 /apis/apps/v1/namespaces/deployment-6933/replicasets/webserver-deployment-dd94f59b7 7b77b39f-14f6-4b41-a01d-ce9600cc21ce 5770461 3 2020-08-09 23:39:20 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment feb13398-4aaf-4e0d-a280-d3ba5ac34345 0xc000479527 0xc000479528}] [] [{kube-controller-manager Update apps/v1 2020-08-09 23:39:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"feb13398-4aaf-4e0d-a280-d3ba5ac34345\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selecto
r:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: dd94f59b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000479598 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Aug 9 23:39:38.259: INFO: Pod "webserver-deployment-795d758f88-2jphv" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-2jphv webserver-deployment-795d758f88- deployment-6933 /api/v1/namespaces/deployment-6933/pods/webserver-deployment-795d758f88-2jphv 674f3c6b-bf0f-42cf-b584-a1f2f5da4f77 5770349 0 2020-08-09 23:39:32 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 873e33f4-89f5-4ddb-8462-6b2202a419e9 0xc0021221f7 0xc0021221f8}] [] [{kube-controller-manager Update v1 2020-08-09 23:39:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"873e33f4-89f5-4ddb-8462-6b2202a419e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-09 23:39:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j999f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j999f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{
},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j999f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions
:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-09 23:39:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 9 23:39:38.260: INFO: Pod "webserver-deployment-795d758f88-445wr" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-445wr webserver-deployment-795d758f88- deployment-6933 /api/v1/namespaces/deployment-6933/pods/webserver-deployment-795d758f88-445wr eb0a0409-e914-41d3-b5ae-60cf1f4e0521 5770342 0 2020-08-09 23:39:32 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 873e33f4-89f5-4ddb-8462-6b2202a419e9 0xc0021223c7 0xc0021223c8}] [] [{kube-controller-manager Update v1 2020-08-09 23:39:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"873e33f4-89f5-4ddb-8462-6b2202a419e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-09 23:39:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j999f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j999f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{
},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j999f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Condition
s:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-08-09 23:39:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 9 23:39:38.260: INFO: Pod "webserver-deployment-795d758f88-5x9hg" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-5x9hg webserver-deployment-795d758f88- deployment-6933 /api/v1/namespaces/deployment-6933/pods/webserver-deployment-795d758f88-5x9hg 74e21443-e262-4bb8-8364-7b194306f6e3 5770459 0 2020-08-09 23:39:34 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 873e33f4-89f5-4ddb-8462-6b2202a419e9 0xc002122597 0xc002122598}] [] [{kube-controller-manager Update v1 2020-08-09 23:39:34 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"873e33f4-89f5-4ddb-8462-6b2202a419e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-09 23:39:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j999f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j999f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{
},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j999f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions
:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-09 23:39:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 9 23:39:38.260: INFO: Pod "webserver-deployment-795d758f88-6msxb" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-6msxb webserver-deployment-795d758f88- deployment-6933 /api/v1/namespaces/deployment-6933/pods/webserver-deployment-795d758f88-6msxb 54cee394-4b64-4a1a-972f-a7593f7b8fdb 5770355 0 2020-08-09 23:39:32 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 873e33f4-89f5-4ddb-8462-6b2202a419e9 0xc002122757 0xc002122758}] [] [{kube-controller-manager Update v1 2020-08-09 23:39:32 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"873e33f4-89f5-4ddb-8462-6b2202a419e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-09 23:39:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j999f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j999f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{
},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j999f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,
Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-08-09 23:39:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 9 23:39:38.261: INFO: Pod "webserver-deployment-795d758f88-ck4jm" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-ck4jm webserver-deployment-795d758f88- deployment-6933 /api/v1/namespaces/deployment-6933/pods/webserver-deployment-795d758f88-ck4jm 53f310b3-3fc4-49a1-8767-40a9ae7f7321 5770372 0 2020-08-09 23:39:33 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 873e33f4-89f5-4ddb-8462-6b2202a419e9 0xc002122917 0xc002122918}] [] [{kube-controller-manager Update v1 2020-08-09 23:39:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"873e33f4-89f5-4ddb-8462-6b2202a419e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-09 23:39:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j999f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j999f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{
},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j999f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,
Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-08-09 23:39:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 9 23:39:38.261: INFO: Pod "webserver-deployment-795d758f88-dj5dd" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-dj5dd webserver-deployment-795d758f88- deployment-6933 /api/v1/namespaces/deployment-6933/pods/webserver-deployment-795d758f88-dj5dd c78299ff-b5e4-47a3-bf19-172e6918c45b 5770471 0 2020-08-09 23:39:35 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 873e33f4-89f5-4ddb-8462-6b2202a419e9 0xc002122ac7 0xc002122ac8}] [] [{kube-controller-manager Update v1 2020-08-09 23:39:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"873e33f4-89f5-4ddb-8462-6b2202a419e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-09 23:39:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j999f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j999f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{
},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j999f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions
:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-09 23:39:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 9 23:39:38.262: INFO: Pod "webserver-deployment-795d758f88-dqkh7" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-dqkh7 webserver-deployment-795d758f88- deployment-6933 /api/v1/namespaces/deployment-6933/pods/webserver-deployment-795d758f88-dqkh7 75cc4d96-0f6f-4838-8559-9e4d3b9a3483 5770485 0 2020-08-09 23:39:35 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 873e33f4-89f5-4ddb-8462-6b2202a419e9 0xc002122c77 0xc002122c78}] [] [{kube-controller-manager Update v1 2020-08-09 23:39:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"873e33f4-89f5-4ddb-8462-6b2202a419e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-09 23:39:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j999f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j999f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{
},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j999f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,
Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-08-09 23:39:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 9 23:39:38.262: INFO: Pod "webserver-deployment-795d758f88-qhmlb" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-qhmlb webserver-deployment-795d758f88- deployment-6933 /api/v1/namespaces/deployment-6933/pods/webserver-deployment-795d758f88-qhmlb 6bf1091c-de26-401d-a3fa-d603c3ab483d 5770475 0 2020-08-09 23:39:35 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 873e33f4-89f5-4ddb-8462-6b2202a419e9 0xc002122e37 0xc002122e38}] [] [{kube-controller-manager Update v1 2020-08-09 23:39:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"873e33f4-89f5-4ddb-8462-6b2202a419e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-09 23:39:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j999f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j999f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{
},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j999f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,
Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-08-09 23:39:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 9 23:39:38.262: INFO: Pod "webserver-deployment-795d758f88-rcszl" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-rcszl webserver-deployment-795d758f88- deployment-6933 /api/v1/namespaces/deployment-6933/pods/webserver-deployment-795d758f88-rcszl 3ae4d61d-9b8d-4463-b03b-a410717e006f 5770488 0 2020-08-09 23:39:35 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 873e33f4-89f5-4ddb-8462-6b2202a419e9 0xc002122ff7 0xc002122ff8}] [] [{kube-controller-manager Update v1 2020-08-09 23:39:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"873e33f4-89f5-4ddb-8462-6b2202a419e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-09 23:39:36 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j999f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j999f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{
},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j999f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions
:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-09 23:39:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 9 23:39:38.263: INFO: Pod "webserver-deployment-795d758f88-rmlfm" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-rmlfm webserver-deployment-795d758f88- deployment-6933 /api/v1/namespaces/deployment-6933/pods/webserver-deployment-795d758f88-rmlfm 85353952-74c9-42ba-8150-49362a901982 5770370 0 2020-08-09 23:39:33 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 873e33f4-89f5-4ddb-8462-6b2202a419e9 0xc0021231a7 0xc0021231a8}] [] [{kube-controller-manager Update v1 2020-08-09 23:39:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"873e33f4-89f5-4ddb-8462-6b2202a419e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-09 23:39:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j999f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j999f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{
},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j999f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions
:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-09 23:39:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 9 23:39:38.263: INFO: Pod "webserver-deployment-795d758f88-wcd5m" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-wcd5m webserver-deployment-795d758f88- deployment-6933 /api/v1/namespaces/deployment-6933/pods/webserver-deployment-795d758f88-wcd5m 3c25003d-1875-430b-bd3d-96c8748336bf 5770495 0 2020-08-09 23:39:35 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 873e33f4-89f5-4ddb-8462-6b2202a419e9 0xc002123357 0xc002123358}] [] [{kube-controller-manager Update v1 2020-08-09 23:39:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"873e33f4-89f5-4ddb-8462-6b2202a419e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-09 23:39:36 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j999f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j999f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{
},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j999f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Condition
s:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-08-09 23:39:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 9 23:39:38.263: INFO: Pod "webserver-deployment-795d758f88-wg8fj" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-wg8fj webserver-deployment-795d758f88- deployment-6933 /api/v1/namespaces/deployment-6933/pods/webserver-deployment-795d758f88-wg8fj 1d8bfdcb-41ce-4555-ade1-7b510c4a0421 5770449 0 2020-08-09 23:39:35 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 873e33f4-89f5-4ddb-8462-6b2202a419e9 0xc002123517 0xc002123518}] [] [{kube-controller-manager Update v1 2020-08-09 23:39:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"873e33f4-89f5-4ddb-8462-6b2202a419e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j999f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j999f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j999f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile
:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 9 23:39:38.263: INFO: Pod "webserver-deployment-795d758f88-zqf7g" is not available: 
&Pod{ObjectMeta:{webserver-deployment-795d758f88-zqf7g webserver-deployment-795d758f88- deployment-6933 /api/v1/namespaces/deployment-6933/pods/webserver-deployment-795d758f88-zqf7g 104411cd-6e23-4fd5-9bc2-4148ce04719e 5770493 0 2020-08-09 23:39:35 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 873e33f4-89f5-4ddb-8462-6b2202a419e9 0xc002123657 0xc002123658}] [] [{kube-controller-manager Update v1 2020-08-09 23:39:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"873e33f4-89f5-4ddb-8462-6b2202a419e9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-09 23:39:36 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j999f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j999f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j999f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-09 23:39:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 9 23:39:38.264: INFO: Pod "webserver-deployment-dd94f59b7-4g9b8" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-4g9b8 webserver-deployment-dd94f59b7- deployment-6933 /api/v1/namespaces/deployment-6933/pods/webserver-deployment-dd94f59b7-4g9b8 741498ab-3a37-41b3-9c9e-7e2e99ed94b1 5770271 0 2020-08-09 23:39:20 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 7b77b39f-14f6-4b41-a01d-ce9600cc21ce 0xc002123837 0xc002123838}] [] [{kube-controller-manager Update v1 2020-08-09 23:39:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b77b39f-14f6-4b41-a01d-ce9600cc21ce\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-09 23:39:29 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.33\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j999f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j999f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j999f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:20 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.33,StartTime:2020-08-09 23:39:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-09 23:39:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a839c4d151abbfab41848efc1aa5d38fb53e7c69562c66d82d68094d57749f5f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.33,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 9 23:39:38.264: INFO: Pod "webserver-deployment-dd94f59b7-5v7g4" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-5v7g4 webserver-deployment-dd94f59b7- deployment-6933 /api/v1/namespaces/deployment-6933/pods/webserver-deployment-dd94f59b7-5v7g4 09590c51-f76d-45e3-919b-8546d3f7a18e 5770469 0 2020-08-09 23:39:35 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 7b77b39f-14f6-4b41-a01d-ce9600cc21ce 0xc0021239e7 0xc0021239e8}] [] [{kube-controller-manager Update v1 2020-08-09 23:39:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b77b39f-14f6-4b41-a01d-ce9600cc21ce\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-09 23:39:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j999f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j999f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{
},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j999f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus
{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-08-09 23:39:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 9 23:39:38.264: INFO: Pod "webserver-deployment-dd94f59b7-6bmjf" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-6bmjf webserver-deployment-dd94f59b7- deployment-6933 /api/v1/namespaces/deployment-6933/pods/webserver-deployment-dd94f59b7-6bmjf 92db362c-69ee-4442-994c-fd8ccad2c7c5 5770489 0 2020-08-09 23:39:35 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 7b77b39f-14f6-4b41-a01d-ce9600cc21ce 0xc002123c17 0xc002123c18}] [] [{kube-controller-manager Update v1 2020-08-09 23:39:35 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b77b39f-14f6-4b41-a01d-ce9600cc21ce\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-09 23:39:36 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j999f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j999f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{
},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j999f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus
{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-08-09 23:39:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 9 23:39:38.265: INFO: Pod "webserver-deployment-dd94f59b7-77dsk" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-77dsk webserver-deployment-dd94f59b7- deployment-6933 /api/v1/namespaces/deployment-6933/pods/webserver-deployment-dd94f59b7-77dsk 9f79c407-2585-49eb-a96c-4751890b1717 5770290 0 2020-08-09 23:39:20 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 7b77b39f-14f6-4b41-a01d-ce9600cc21ce 0xc002123da7 0xc002123da8}] [] [{kube-controller-manager Update v1 2020-08-09 23:39:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b77b39f-14f6-4b41-a01d-ce9600cc21ce\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-09 23:39:31 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.35\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j999f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j999f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:Resou
rceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j999f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHo
stnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.35,StartTime:2020-08-09 23:39:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-09 23:39:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://724679a92f3ad7d7b0739b73d8a655e80e19c74f8d7b9dfc3a3656c547706fd0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.35,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 9 23:39:38.265: INFO: Pod "webserver-deployment-dd94f59b7-99d6n" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-99d6n webserver-deployment-dd94f59b7- deployment-6933 /api/v1/namespaces/deployment-6933/pods/webserver-deployment-dd94f59b7-99d6n 67d73226-3fc7-4050-a6b7-92c7c63e278c 5770465 0 2020-08-09 23:39:35 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 7b77b39f-14f6-4b41-a01d-ce9600cc21ce 0xc002123f77 
0xc002123f78}] [] [{kube-controller-manager Update v1 2020-08-09 23:39:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b77b39f-14f6-4b41-a01d-ce9600cc21ce\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-09 23:39:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j999f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j999f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDi
r:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j999f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadCons
traint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-09 23:39:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 9 23:39:38.265: INFO: Pod "webserver-deployment-dd94f59b7-9fjdj" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-9fjdj webserver-deployment-dd94f59b7- deployment-6933 /api/v1/namespaces/deployment-6933/pods/webserver-deployment-dd94f59b7-9fjdj e715ebee-96d2-447e-ab6a-7f51820ccb08 5770481 0 2020-08-09 23:39:35 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 7b77b39f-14f6-4b41-a01d-ce9600cc21ce 0xc0022a0117 0xc0022a0118}] [] 
[{kube-controller-manager Update v1 2020-08-09 23:39:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b77b39f-14f6-4b41-a01d-ce9600cc21ce\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-09 23:39:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j999f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j999f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]Contain
erPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j999f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},Ephemera
lContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-08-09 23:39:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 9 23:39:38.265: INFO: Pod "webserver-deployment-dd94f59b7-bm4mb" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-bm4mb webserver-deployment-dd94f59b7- deployment-6933 /api/v1/namespaces/deployment-6933/pods/webserver-deployment-dd94f59b7-bm4mb eb26d5ec-6d1b-4346-9847-92f22fd4920d 5770301 0 2020-08-09 23:39:20 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 7b77b39f-14f6-4b41-a01d-ce9600cc21ce 0xc0022a02b7 0xc0022a02b8}] [] [{kube-controller-manager Update v1 
2020-08-09 23:39:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b77b39f-14f6-4b41-a01d-ce9600cc21ce\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-09 23:39:31 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.95\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j999f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j999f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]Contain
erPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j999f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},Ephemera
lContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.95,StartTime:2020-08-09 23:39:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-09 23:39:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://8670c068ee68b6632c5caf4886d5b8da6e339c254bd2929ee8cda6da657caf41,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.95,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 9 23:39:38.266: INFO: Pod "webserver-deployment-dd94f59b7-bskzt" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-bskzt webserver-deployment-dd94f59b7- deployment-6933 /api/v1/namespaces/deployment-6933/pods/webserver-deployment-dd94f59b7-bskzt 83e2d770-7b4d-4132-a83b-ed2b604ef024 5770498 0 2020-08-09 23:39:35 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 
7b77b39f-14f6-4b41-a01d-ce9600cc21ce 0xc0022a0467 0xc0022a0468}] [] [{kube-controller-manager Update v1 2020-08-09 23:39:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b77b39f-14f6-4b41-a01d-ce9600cc21ce\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-09 23:39:36 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j999f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j999f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/librar
y/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j999f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceLis
t{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-09 23:39:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 9 23:39:38.266: INFO: Pod "webserver-deployment-dd94f59b7-hdqps" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-hdqps webserver-deployment-dd94f59b7- deployment-6933 /api/v1/namespaces/deployment-6933/pods/webserver-deployment-dd94f59b7-hdqps 9f27bb07-cc5a-412d-94c7-e3c0901c150d 5770483 0 2020-08-09 23:39:35 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 7b77b39f-14f6-4b41-a01d-ce9600cc21ce 
0xc0022a05f7 0xc0022a05f8}] [] [{kube-controller-manager Update v1 2020-08-09 23:39:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b77b39f-14f6-4b41-a01d-ce9600cc21ce\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-09 23:39:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j999f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j999f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args
:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j999f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]Topol
ogySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-09 23:39:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 9 23:39:38.267: INFO: Pod "webserver-deployment-dd94f59b7-hrl8r" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-hrl8r webserver-deployment-dd94f59b7- deployment-6933 /api/v1/namespaces/deployment-6933/pods/webserver-deployment-dd94f59b7-hrl8r b6b520c0-1c9f-402d-8d5b-368d8c91c9bc 5770263 0 2020-08-09 23:39:20 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 7b77b39f-14f6-4b41-a01d-ce9600cc21ce 0xc0022a0787 0xc0022a0788}] [] 
[{kube-controller-manager Update v1 2020-08-09 23:39:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b77b39f-14f6-4b41-a01d-ce9600cc21ce\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-09 23:39:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.92\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j999f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j999f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[]
,Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j999f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[
]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.92,StartTime:2020-08-09 23:39:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-09 23:39:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://742028eee858987d834777965c4bf26614c6256ac12450b8b4fb53f17abf232a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.92,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 9 23:39:38.267: INFO: Pod "webserver-deployment-dd94f59b7-kc5p6" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-kc5p6 webserver-deployment-dd94f59b7- deployment-6933 /api/v1/namespaces/deployment-6933/pods/webserver-deployment-dd94f59b7-kc5p6 ab10d9fd-a537-42c7-8de5-621ddf6f5f56 5770298 0 2020-08-09 23:39:20 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet 
webserver-deployment-dd94f59b7 7b77b39f-14f6-4b41-a01d-ce9600cc21ce 0xc0022a0ad7 0xc0022a0ad8}] [] [{kube-controller-manager Update v1 2020-08-09 23:39:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b77b39f-14f6-4b41-a01d-ce9600cc21ce\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-09 23:39:31 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.36\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j999f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j999f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},}
,Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j999f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLink
s:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.36,StartTime:2020-08-09 23:39:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-09 23:39:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://efa4f80fb93dbb4c62516868f8a60dc61b2bf2e6e85e325b458a495fd4c903ee,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.36,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 9 23:39:38.268: INFO: Pod "webserver-deployment-dd94f59b7-l67mk" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-l67mk webserver-deployment-dd94f59b7- deployment-6933 /api/v1/namespaces/deployment-6933/pods/webserver-deployment-dd94f59b7-l67mk 3955bdea-03ef-4f4b-8c0a-cd39179621c4 5770305 0 2020-08-09 
23:39:20 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 7b77b39f-14f6-4b41-a01d-ce9600cc21ce 0xc0022a0d67 0xc0022a0d68}] [] [{kube-controller-manager Update v1 2020-08-09 23:39:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b77b39f-14f6-4b41-a01d-ce9600cc21ce\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-09 23:39:31 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.93\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j999f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j999f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:
nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j999f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProc
essNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.93,StartTime:2020-08-09 23:39:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-09 23:39:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://da3cee969fe8d3742f41d6aadf23f1239d00f53b9d4a6dcf02d5aba75d5f4f44,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.93,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 9 23:39:38.268: INFO: Pod "webserver-deployment-dd94f59b7-nfkfr" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-nfkfr webserver-deployment-dd94f59b7- deployment-6933 
/api/v1/namespaces/deployment-6933/pods/webserver-deployment-dd94f59b7-nfkfr f58167e5-ae30-4c06-8498-8a7cf1d38dd7 5770441 0 2020-08-09 23:39:34 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 7b77b39f-14f6-4b41-a01d-ce9600cc21ce 0xc0022a1047 0xc0022a1048}] [] [{kube-controller-manager Update v1 2020-08-09 23:39:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b77b39f-14f6-4b41-a01d-ce9600cc21ce\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-09 23:39:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j999f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j999f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j999f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},St
artupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-09 23:39:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 9 23:39:38.268: INFO: Pod "webserver-deployment-dd94f59b7-pm2gm" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-pm2gm webserver-deployment-dd94f59b7- deployment-6933 /api/v1/namespaces/deployment-6933/pods/webserver-deployment-dd94f59b7-pm2gm 8d4a254c-ede7-4a3b-9a4b-6bd0208f669d 5770477 0 2020-08-09 23:39:35 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 7b77b39f-14f6-4b41-a01d-ce9600cc21ce 0xc0022a13f7 0xc0022a13f8}] [] [{kube-controller-manager Update v1 2020-08-09 23:39:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b77b39f-14f6-4b41-a01d-ce9600cc21ce\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-09 23:39:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j999f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j999f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j999f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},St
artupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-09 23:39:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 9 23:39:38.269: INFO: Pod "webserver-deployment-dd94f59b7-rjdcp" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-rjdcp webserver-deployment-dd94f59b7- deployment-6933 /api/v1/namespaces/deployment-6933/pods/webserver-deployment-dd94f59b7-rjdcp a2455f20-3950-405b-a98d-1e9ade3f1f9c 5770503 0 2020-08-09 23:39:35 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 7b77b39f-14f6-4b41-a01d-ce9600cc21ce 0xc0022a15e7 0xc0022a15e8}] [] [{kube-controller-manager Update v1 2020-08-09 23:39:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b77b39f-14f6-4b41-a01d-ce9600cc21ce\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-09 23:39:37 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j999f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j999f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j999f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},St
artupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-08-09 23:39:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 9 23:39:38.269: INFO: Pod "webserver-deployment-dd94f59b7-rwzfw" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-rwzfw webserver-deployment-dd94f59b7- deployment-6933 /api/v1/namespaces/deployment-6933/pods/webserver-deployment-dd94f59b7-rwzfw 1c6b4e47-991c-488e-9f03-d7967d1820c5 5770463 0 2020-08-09 23:39:34 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 7b77b39f-14f6-4b41-a01d-ce9600cc21ce 0xc0022a1907 0xc0022a1908}] [] [{kube-controller-manager Update v1 2020-08-09 23:39:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b77b39f-14f6-4b41-a01d-ce9600cc21ce\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-09 23:39:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j999f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j999f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j999f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},St
artupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-08-09 23:39:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 9 23:39:38.269: INFO: Pod "webserver-deployment-dd94f59b7-tlnd7" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-tlnd7 webserver-deployment-dd94f59b7- deployment-6933 /api/v1/namespaces/deployment-6933/pods/webserver-deployment-dd94f59b7-tlnd7 7e505ad0-2b12-4e8c-a197-19eeb323c49d 5770504 0 2020-08-09 23:39:35 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 7b77b39f-14f6-4b41-a01d-ce9600cc21ce 0xc0022a1ae7 0xc0022a1ae8}] [] [{kube-controller-manager Update v1 2020-08-09 23:39:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b77b39f-14f6-4b41-a01d-ce9600cc21ce\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-09 23:39:37 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j999f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j999f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j999f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},St
artupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:,StartTime:2020-08-09 23:39:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 9 23:39:38.270: INFO: Pod "webserver-deployment-dd94f59b7-v84kl" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-v84kl webserver-deployment-dd94f59b7- deployment-6933 /api/v1/namespaces/deployment-6933/pods/webserver-deployment-dd94f59b7-v84kl 0c66d302-0054-4b6b-8f11-91a667d567fc 5770456 0 2020-08-09 23:39:34 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 7b77b39f-14f6-4b41-a01d-ce9600cc21ce 0xc0022a1c77 0xc0022a1c78}] [] [{kube-controller-manager Update v1 2020-08-09 23:39:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b77b39f-14f6-4b41-a01d-ce9600cc21ce\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-09 23:39:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j999f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j999f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j999f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},St
artupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:35 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-08-09 23:39:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-08-09 23:39:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 9 23:39:38.270: INFO: Pod "webserver-deployment-dd94f59b7-wpbvp" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-wpbvp webserver-deployment-dd94f59b7- deployment-6933 /api/v1/namespaces/deployment-6933/pods/webserver-deployment-dd94f59b7-wpbvp 0c0f69e0-0928-4e25-b8bd-85a1bf557e12 5770251 0 2020-08-09 23:39:20 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 7b77b39f-14f6-4b41-a01d-ce9600cc21ce 0xc0022a1f87 0xc0022a1f88}] [] [{kube-controller-manager Update v1 2020-08-09 23:39:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b77b39f-14f6-4b41-a01d-ce9600cc21ce\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-09 23:39:26 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.91\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j999f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j999f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j999f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:20 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.91,StartTime:2020-08-09 23:39:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-09 23:39:24 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0297c881ac66fd3b956a3ef2d5b0e45a030d7e3bd09bf90bc0e7353b011b82e2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.91,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 9 23:39:38.270: INFO: Pod "webserver-deployment-dd94f59b7-zq6jt" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-zq6jt webserver-deployment-dd94f59b7- deployment-6933 /api/v1/namespaces/deployment-6933/pods/webserver-deployment-dd94f59b7-zq6jt df8be782-aab7-43d7-b459-61e4db36e016 5770282 0 2020-08-09 23:39:20 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 7b77b39f-14f6-4b41-a01d-ce9600cc21ce 0xc003596277 0xc003596278}] [] [{kube-controller-manager Update v1 2020-08-09 23:39:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b77b39f-14f6-4b41-a01d-ce9600cc21ce\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-09 23:39:30 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.34\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j999f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j999f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:Resou
rceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j999f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHo
stnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-09 23:39:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.14,PodIP:10.244.1.34,StartTime:2020-08-09 23:39:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-09 23:39:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://1a0dd57edc1474d9670b7ee339875b23899b83453dc04ad5faf17e49fa1f3c2b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.34,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:39:38.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6933" for this suite. 
• [SLOW TEST:18.222 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":303,"completed":54,"skipped":857,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:39:38.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-dc4befbf-62a7-4b3e-b348-092844e22e25 STEP: Creating a pod to test consume configMaps Aug 9 23:39:39.833: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8cb29013-82a4-4733-bce7-b780fa963e1f" in namespace "projected-2705" to be "Succeeded or Failed" Aug 9 23:39:39.996: INFO: Pod "pod-projected-configmaps-8cb29013-82a4-4733-bce7-b780fa963e1f": Phase="Pending", Reason="", readiness=false. Elapsed: 163.500878ms Aug 9 23:39:42.006: INFO: Pod "pod-projected-configmaps-8cb29013-82a4-4733-bce7-b780fa963e1f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.173283931s Aug 9 23:39:44.275: INFO: Pod "pod-projected-configmaps-8cb29013-82a4-4733-bce7-b780fa963e1f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.442779913s Aug 9 23:39:46.350: INFO: Pod "pod-projected-configmaps-8cb29013-82a4-4733-bce7-b780fa963e1f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.517396468s Aug 9 23:39:49.058: INFO: Pod "pod-projected-configmaps-8cb29013-82a4-4733-bce7-b780fa963e1f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.225556129s Aug 9 23:39:51.165: INFO: Pod "pod-projected-configmaps-8cb29013-82a4-4733-bce7-b780fa963e1f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.332041176s Aug 9 23:39:53.174: INFO: Pod "pod-projected-configmaps-8cb29013-82a4-4733-bce7-b780fa963e1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.34126999s STEP: Saw pod success Aug 9 23:39:53.174: INFO: Pod "pod-projected-configmaps-8cb29013-82a4-4733-bce7-b780fa963e1f" satisfied condition "Succeeded or Failed" Aug 9 23:39:53.180: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-8cb29013-82a4-4733-bce7-b780fa963e1f container projected-configmap-volume-test: STEP: delete the pod Aug 9 23:39:53.261: INFO: Waiting for pod pod-projected-configmaps-8cb29013-82a4-4733-bce7-b780fa963e1f to disappear Aug 9 23:39:53.283: INFO: Pod pod-projected-configmaps-8cb29013-82a4-4733-bce7-b780fa963e1f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:39:53.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2705" for this suite. 
• [SLOW TEST:14.480 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":55,"skipped":896,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:39:53.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-a14bc450-f3f1-42f6-95a5-bbac2f6eafdf STEP: Creating the pod STEP: Updating configmap configmap-test-upd-a14bc450-f3f1-42f6-95a5-bbac2f6eafdf STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:41:22.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3507" for this suite. 
• [SLOW TEST:89.080 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":56,"skipped":910,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:41:22.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition Aug 9 23:41:22.537: INFO: Waiting up to 5m0s for pod "var-expansion-ab1a925a-2d4e-4177-a52a-14bd09031921" in namespace "var-expansion-9677" to be "Succeeded or Failed" Aug 9 23:41:22.560: INFO: Pod "var-expansion-ab1a925a-2d4e-4177-a52a-14bd09031921": Phase="Pending", Reason="", readiness=false. Elapsed: 22.842367ms Aug 9 23:41:24.639: INFO: Pod "var-expansion-ab1a925a-2d4e-4177-a52a-14bd09031921": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.102016615s Aug 9 23:41:26.643: INFO: Pod "var-expansion-ab1a925a-2d4e-4177-a52a-14bd09031921": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.105919935s STEP: Saw pod success Aug 9 23:41:26.643: INFO: Pod "var-expansion-ab1a925a-2d4e-4177-a52a-14bd09031921" satisfied condition "Succeeded or Failed" Aug 9 23:41:26.645: INFO: Trying to get logs from node latest-worker2 pod var-expansion-ab1a925a-2d4e-4177-a52a-14bd09031921 container dapi-container: STEP: delete the pod Aug 9 23:41:26.692: INFO: Waiting for pod var-expansion-ab1a925a-2d4e-4177-a52a-14bd09031921 to disappear Aug 9 23:41:26.697: INFO: Pod var-expansion-ab1a925a-2d4e-4177-a52a-14bd09031921 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:41:26.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9677" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":303,"completed":57,"skipped":954,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:41:26.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-downwardapi-4lr4 STEP: Creating a pod to test atomic-volume-subpath Aug 9 23:41:26.837: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-4lr4" in namespace "subpath-9240" to be "Succeeded or Failed" Aug 9 23:41:26.858: INFO: Pod "pod-subpath-test-downwardapi-4lr4": Phase="Pending", Reason="", readiness=false. Elapsed: 20.295522ms Aug 9 23:41:28.983: INFO: Pod "pod-subpath-test-downwardapi-4lr4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14597293s Aug 9 23:41:30.987: INFO: Pod "pod-subpath-test-downwardapi-4lr4": Phase="Running", Reason="", readiness=true. Elapsed: 4.149350872s Aug 9 23:41:32.990: INFO: Pod "pod-subpath-test-downwardapi-4lr4": Phase="Running", Reason="", readiness=true. Elapsed: 6.152956319s Aug 9 23:41:34.995: INFO: Pod "pod-subpath-test-downwardapi-4lr4": Phase="Running", Reason="", readiness=true. Elapsed: 8.157492607s Aug 9 23:41:37.000: INFO: Pod "pod-subpath-test-downwardapi-4lr4": Phase="Running", Reason="", readiness=true. Elapsed: 10.162198585s Aug 9 23:41:39.004: INFO: Pod "pod-subpath-test-downwardapi-4lr4": Phase="Running", Reason="", readiness=true. Elapsed: 12.166623649s Aug 9 23:41:41.008: INFO: Pod "pod-subpath-test-downwardapi-4lr4": Phase="Running", Reason="", readiness=true. Elapsed: 14.17059847s Aug 9 23:41:43.013: INFO: Pod "pod-subpath-test-downwardapi-4lr4": Phase="Running", Reason="", readiness=true. Elapsed: 16.17550461s Aug 9 23:41:45.017: INFO: Pod "pod-subpath-test-downwardapi-4lr4": Phase="Running", Reason="", readiness=true. Elapsed: 18.179255166s Aug 9 23:41:47.021: INFO: Pod "pod-subpath-test-downwardapi-4lr4": Phase="Running", Reason="", readiness=true. Elapsed: 20.183480566s Aug 9 23:41:49.025: INFO: Pod "pod-subpath-test-downwardapi-4lr4": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.187902332s Aug 9 23:41:51.046: INFO: Pod "pod-subpath-test-downwardapi-4lr4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.208960687s STEP: Saw pod success Aug 9 23:41:51.046: INFO: Pod "pod-subpath-test-downwardapi-4lr4" satisfied condition "Succeeded or Failed" Aug 9 23:41:51.049: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-downwardapi-4lr4 container test-container-subpath-downwardapi-4lr4: STEP: delete the pod Aug 9 23:41:51.099: INFO: Waiting for pod pod-subpath-test-downwardapi-4lr4 to disappear Aug 9 23:41:51.267: INFO: Pod pod-subpath-test-downwardapi-4lr4 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-4lr4 Aug 9 23:41:51.267: INFO: Deleting pod "pod-subpath-test-downwardapi-4lr4" in namespace "subpath-9240" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:41:51.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9240" for this suite. 
• [SLOW TEST:24.645 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":303,"completed":58,"skipped":965,"failed":0} [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:41:51.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-21c34be2-cfb7-4cd7-a6d6-aec2484181b8 STEP: Creating a pod to test consume secrets Aug 9 23:41:51.572: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0afdece1-9a48-49d7-911a-9ef4dda130a8" in namespace "projected-483" to be "Succeeded or Failed" Aug 9 23:41:51.584: INFO: Pod "pod-projected-secrets-0afdece1-9a48-49d7-911a-9ef4dda130a8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.612055ms Aug 9 23:41:53.627: INFO: Pod "pod-projected-secrets-0afdece1-9a48-49d7-911a-9ef4dda130a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055084318s Aug 9 23:41:55.632: INFO: Pod "pod-projected-secrets-0afdece1-9a48-49d7-911a-9ef4dda130a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059878995s STEP: Saw pod success Aug 9 23:41:55.632: INFO: Pod "pod-projected-secrets-0afdece1-9a48-49d7-911a-9ef4dda130a8" satisfied condition "Succeeded or Failed" Aug 9 23:41:55.635: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-0afdece1-9a48-49d7-911a-9ef4dda130a8 container projected-secret-volume-test: STEP: delete the pod Aug 9 23:41:55.699: INFO: Waiting for pod pod-projected-secrets-0afdece1-9a48-49d7-911a-9ef4dda130a8 to disappear Aug 9 23:41:55.702: INFO: Pod pod-projected-secrets-0afdece1-9a48-49d7-911a-9ef4dda130a8 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:41:55.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-483" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":59,"skipped":965,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:41:55.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 9 23:41:55.768: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e294e42f-a2fa-4a84-9d16-a29e26bb96d8" in namespace "projected-6617" to be "Succeeded or Failed" Aug 9 23:41:55.781: INFO: Pod "downwardapi-volume-e294e42f-a2fa-4a84-9d16-a29e26bb96d8": Phase="Pending", Reason="", readiness=false. Elapsed: 13.21936ms Aug 9 23:41:57.801: INFO: Pod "downwardapi-volume-e294e42f-a2fa-4a84-9d16-a29e26bb96d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033166629s Aug 9 23:41:59.806: INFO: Pod "downwardapi-volume-e294e42f-a2fa-4a84-9d16-a29e26bb96d8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.038006545s STEP: Saw pod success Aug 9 23:41:59.806: INFO: Pod "downwardapi-volume-e294e42f-a2fa-4a84-9d16-a29e26bb96d8" satisfied condition "Succeeded or Failed" Aug 9 23:41:59.810: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-e294e42f-a2fa-4a84-9d16-a29e26bb96d8 container client-container: STEP: delete the pod Aug 9 23:41:59.859: INFO: Waiting for pod downwardapi-volume-e294e42f-a2fa-4a84-9d16-a29e26bb96d8 to disappear Aug 9 23:41:59.865: INFO: Pod downwardapi-volume-e294e42f-a2fa-4a84-9d16-a29e26bb96d8 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:41:59.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6617" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":60,"skipped":1004,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:41:59.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1333 STEP: creating the pod Aug 9 23:41:59.946: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6719' Aug 9 23:42:03.871: INFO: stderr: "" Aug 9 23:42:03.871: INFO: stdout: "pod/pause created\n" Aug 9 23:42:03.871: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Aug 9 23:42:03.871: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-6719" to be "running and ready" Aug 9 23:42:03.890: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 18.52587ms Aug 9 23:42:05.895: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02332994s Aug 9 23:42:07.900: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.028352211s Aug 9 23:42:07.900: INFO: Pod "pause" satisfied condition "running and ready" Aug 9 23:42:07.900: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod Aug 9 23:42:07.900: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-6719' Aug 9 23:42:08.011: INFO: stderr: "" Aug 9 23:42:08.011: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Aug 9 23:42:08.011: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6719' Aug 9 23:42:08.131: INFO: stderr: "" Aug 9 23:42:08.131: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod Aug 9 23:42:08.131: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 
--kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-6719' Aug 9 23:42:08.252: INFO: stderr: "" Aug 9 23:42:08.252: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Aug 9 23:42:08.252: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6719' Aug 9 23:42:08.378: INFO: stderr: "" Aug 9 23:42:08.378: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1340 STEP: using delete to clean up resources Aug 9 23:42:08.378: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6719' Aug 9 23:42:08.525: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Aug 9 23:42:08.525: INFO: stdout: "pod \"pause\" force deleted\n" Aug 9 23:42:08.525: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-6719' Aug 9 23:42:08.636: INFO: stderr: "No resources found in kubectl-6719 namespace.\n" Aug 9 23:42:08.636: INFO: stdout: "" Aug 9 23:42:08.636: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-6719 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 9 23:42:08.725: INFO: stderr: "" Aug 9 23:42:08.725: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:42:08.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6719" for this suite. 
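Throughout the run above, the framework shells out to kubectl with explicit `--server` and `--kubeconfig` flags (e.g. the label add/remove invocations logged in this test). A minimal Python sketch of how such an argv can be assembled; `build_kubectl_cmd` is a hypothetical helper for illustration, not the e2e framework's actual code:

```python
# Sketch of assembling kubectl invocations like the ones logged above,
# e.g. adding and then removing "testing-label" on pod "pause".
# build_kubectl_cmd is an illustrative helper, not part of the framework.
def build_kubectl_cmd(args, server, kubeconfig, namespace=None):
    """Return the argv list for a kubectl call with explicit cluster flags."""
    cmd = ["/usr/local/bin/kubectl",
           "--server=" + server,
           "--kubeconfig=" + kubeconfig]
    cmd.extend(args)
    if namespace:
        cmd.append("--namespace=" + namespace)
    return cmd

add_label = build_kubectl_cmd(
    ["label", "pods", "pause", "testing-label=testing-label-value"],
    server="https://172.30.12.66:42901",
    kubeconfig="/root/.kube/config",
    namespace="kubectl-6719",
)

# A trailing hyphen on the key removes the label, mirroring the
# second invocation in the log ("testing-label-"):
remove_label = build_kubectl_cmd(
    ["label", "pods", "pause", "testing-label-"],
    server="https://172.30.12.66:42901",
    kubeconfig="/root/.kube/config",
    namespace="kubectl-6719",
)
```

The `-L testing-label` flag used afterwards in the log adds a TESTING-LABEL column to `kubectl get pod` output, which is how the test verifies the label's presence and then its absence.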
• [SLOW TEST:8.838 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1330 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":303,"completed":61,"skipped":1011,"failed":0} SSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:42:08.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Aug 9 23:42:08.994: INFO: Waiting up to 5m0s for pod "downward-api-d0f339bb-2158-467a-9663-a8b79544c062" in namespace "downward-api-9620" to be "Succeeded or Failed" Aug 9 23:42:09.010: INFO: Pod "downward-api-d0f339bb-2158-467a-9663-a8b79544c062": Phase="Pending", Reason="", readiness=false. Elapsed: 15.944745ms Aug 9 23:42:11.017: INFO: Pod "downward-api-d0f339bb-2158-467a-9663-a8b79544c062": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.02216392s Aug 9 23:42:13.021: INFO: Pod "downward-api-d0f339bb-2158-467a-9663-a8b79544c062": Phase="Running", Reason="", readiness=true. Elapsed: 4.026307302s Aug 9 23:42:15.025: INFO: Pod "downward-api-d0f339bb-2158-467a-9663-a8b79544c062": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030330962s STEP: Saw pod success Aug 9 23:42:15.025: INFO: Pod "downward-api-d0f339bb-2158-467a-9663-a8b79544c062" satisfied condition "Succeeded or Failed" Aug 9 23:42:15.028: INFO: Trying to get logs from node latest-worker2 pod downward-api-d0f339bb-2158-467a-9663-a8b79544c062 container dapi-container: STEP: delete the pod Aug 9 23:42:15.084: INFO: Waiting for pod downward-api-d0f339bb-2158-467a-9663-a8b79544c062 to disappear Aug 9 23:42:15.094: INFO: Pod downward-api-d0f339bb-2158-467a-9663-a8b79544c062 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:42:15.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9620" for this suite. 
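Each of the pod tests above polls the pod's phase roughly every two seconds, logging the elapsed time, until the pod reaches "Succeeded or Failed" or a 5m0s timeout expires. A minimal sketch of that wait loop, with a pluggable phase getter so it can run standalone (the function name and parameters are illustrative, not the framework's API):

```python
import time

def wait_for_pod_condition(get_phase, timeout=300.0, interval=2.0,
                           clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a terminal phase or time out.
    Mirrors the 'Waiting up to 5m0s ... to be "Succeeded or Failed"'
    loops in the log; get_phase stands in for a pod-status lookup."""
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        if phase in ("Succeeded", "Failed"):
            return phase, elapsed
        if elapsed >= timeout:
            raise TimeoutError("pod still %s after %.1fs" % (phase, elapsed))
        sleep(interval)

# Simulated pod that stays Pending for two polls, then succeeds,
# matching the Pending/Pending/Succeeded sequences logged above:
phases = iter(["Pending", "Pending", "Succeeded"])
phase, _ = wait_for_pod_condition(lambda: next(phases),
                                  interval=0.0, sleep=lambda s: None)
```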
• [SLOW TEST:6.368 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":303,"completed":62,"skipped":1022,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:42:15.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on node default medium Aug 9 23:42:15.163: INFO: Waiting up to 5m0s for pod "pod-69596ac4-93d7-4b30-94f3-0e18840f99ee" in namespace "emptydir-552" to be "Succeeded or Failed" Aug 9 23:42:15.166: INFO: Pod "pod-69596ac4-93d7-4b30-94f3-0e18840f99ee": Phase="Pending", Reason="", readiness=false. Elapsed: 3.722421ms Aug 9 23:42:17.171: INFO: Pod "pod-69596ac4-93d7-4b30-94f3-0e18840f99ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007987831s Aug 9 23:42:19.175: INFO: Pod "pod-69596ac4-93d7-4b30-94f3-0e18840f99ee": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012567049s STEP: Saw pod success Aug 9 23:42:19.175: INFO: Pod "pod-69596ac4-93d7-4b30-94f3-0e18840f99ee" satisfied condition "Succeeded or Failed" Aug 9 23:42:19.178: INFO: Trying to get logs from node latest-worker2 pod pod-69596ac4-93d7-4b30-94f3-0e18840f99ee container test-container: STEP: delete the pod Aug 9 23:42:19.237: INFO: Waiting for pod pod-69596ac4-93d7-4b30-94f3-0e18840f99ee to disappear Aug 9 23:42:19.291: INFO: Pod pod-69596ac4-93d7-4b30-94f3-0e18840f99ee no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:42:19.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-552" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":63,"skipped":1048,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:42:19.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's command Aug 9 23:42:19.653: INFO: Waiting up to 5m0s for pod "var-expansion-f9dca3a0-5771-4fac-aefb-8bbce7a2cebe" in namespace 
"var-expansion-8381" to be "Succeeded or Failed" Aug 9 23:42:19.706: INFO: Pod "var-expansion-f9dca3a0-5771-4fac-aefb-8bbce7a2cebe": Phase="Pending", Reason="", readiness=false. Elapsed: 52.712835ms Aug 9 23:42:21.710: INFO: Pod "var-expansion-f9dca3a0-5771-4fac-aefb-8bbce7a2cebe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057032739s Aug 9 23:42:23.713: INFO: Pod "var-expansion-f9dca3a0-5771-4fac-aefb-8bbce7a2cebe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060615955s STEP: Saw pod success Aug 9 23:42:23.714: INFO: Pod "var-expansion-f9dca3a0-5771-4fac-aefb-8bbce7a2cebe" satisfied condition "Succeeded or Failed" Aug 9 23:42:23.716: INFO: Trying to get logs from node latest-worker2 pod var-expansion-f9dca3a0-5771-4fac-aefb-8bbce7a2cebe container dapi-container: STEP: delete the pod Aug 9 23:42:23.783: INFO: Waiting for pod var-expansion-f9dca3a0-5771-4fac-aefb-8bbce7a2cebe to disappear Aug 9 23:42:23.788: INFO: Pod var-expansion-f9dca3a0-5771-4fac-aefb-8bbce7a2cebe no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:42:23.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8381" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":303,"completed":64,"skipped":1059,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:42:23.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4244.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-4244.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4244.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4244.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-4244.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4244.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 9 23:42:29.961: INFO: DNS probes using dns-4244/dns-test-6c6b4bd0-7f97-4dd8-bc23-dc9de48e40de succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:42:29.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4244" for this suite. 
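The `awk -F.` pipeline in the probe commands above derives the pod's DNS A record from its IP: dots become dashes, and the result is suffixed with `<namespace>.pod.cluster.local`. The same transformation in Python (the sample IP is illustrative):

```python
def pod_a_record(pod_ip, namespace, cluster_domain="cluster.local"):
    """Build the pod A-record name, as the probe's
    awk '{print $1"-"$2"-"$3"-"$4".<ns>.pod.cluster.local"}' does."""
    return "{}.{}.pod.{}".format(pod_ip.replace(".", "-"),
                                 namespace, cluster_domain)

# e.g. a pod at 10.244.1.5 in the test's namespace dns-4244:
name = pod_a_record("10.244.1.5", "dns-4244")
```

The probe then resolves this name over both UDP (`dig +notcp`) and TCP (`dig +tcp`) and writes an OK marker file for each transport that succeeds, which is what the "DNS probes ... succeeded" line above summarizes.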
• [SLOW TEST:6.246 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":303,"completed":65,"skipped":1066,"failed":0} SSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:42:30.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
Aug 9 23:42:30.121: INFO: Created pod &Pod{ObjectMeta:{dns-6774 dns-6774 /api/v1/namespaces/dns-6774/pods/dns-6774 30b15c2a-0d79-4806-a8a7-4f1e5a64868b 5771525 0 2020-08-09 23:42:30 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-08-09 23:42:30 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pf7sd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pf7sd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pf7sd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,}
,},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 9 23:42:30.299: INFO: The status of Pod dns-6774 is Pending, waiting for it to be Running (with Ready = true) Aug 9 23:42:32.303: INFO: The status of Pod dns-6774 is Pending, waiting for it to be Running (with Ready = true) Aug 9 23:42:34.304: INFO: The status of Pod dns-6774 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on 
pod... Aug 9 23:42:34.304: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-6774 PodName:dns-6774 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 9 23:42:34.304: INFO: >>> kubeConfig: /root/.kube/config I0809 23:42:34.340584 8 log.go:181] (0xc0012b48f0) (0xc0025add60) Create stream I0809 23:42:34.340627 8 log.go:181] (0xc0012b48f0) (0xc0025add60) Stream added, broadcasting: 1 I0809 23:42:34.345468 8 log.go:181] (0xc0012b48f0) Reply frame received for 1 I0809 23:42:34.345519 8 log.go:181] (0xc0012b48f0) (0xc00253a640) Create stream I0809 23:42:34.345537 8 log.go:181] (0xc0012b48f0) (0xc00253a640) Stream added, broadcasting: 3 I0809 23:42:34.346693 8 log.go:181] (0xc0012b48f0) Reply frame received for 3 I0809 23:42:34.346721 8 log.go:181] (0xc0012b48f0) (0xc00253a6e0) Create stream I0809 23:42:34.346730 8 log.go:181] (0xc0012b48f0) (0xc00253a6e0) Stream added, broadcasting: 5 I0809 23:42:34.347472 8 log.go:181] (0xc0012b48f0) Reply frame received for 5 I0809 23:42:34.430357 8 log.go:181] (0xc0012b48f0) Data frame received for 3 I0809 23:42:34.430390 8 log.go:181] (0xc00253a640) (3) Data frame handling I0809 23:42:34.430410 8 log.go:181] (0xc00253a640) (3) Data frame sent I0809 23:42:34.433755 8 log.go:181] (0xc0012b48f0) Data frame received for 5 I0809 23:42:34.433789 8 log.go:181] (0xc00253a6e0) (5) Data frame handling I0809 23:42:34.433815 8 log.go:181] (0xc0012b48f0) Data frame received for 3 I0809 23:42:34.433825 8 log.go:181] (0xc00253a640) (3) Data frame handling I0809 23:42:34.435655 8 log.go:181] (0xc0012b48f0) Data frame received for 1 I0809 23:42:34.435714 8 log.go:181] (0xc0025add60) (1) Data frame handling I0809 23:42:34.435750 8 log.go:181] (0xc0025add60) (1) Data frame sent I0809 23:42:34.435787 8 log.go:181] (0xc0012b48f0) (0xc0025add60) Stream removed, broadcasting: 1 I0809 23:42:34.435927 8 log.go:181] (0xc0012b48f0) Go away received I0809 23:42:34.436234 8 log.go:181] (0xc0012b48f0) 
(0xc0025add60) Stream removed, broadcasting: 1 I0809 23:42:34.436260 8 log.go:181] (0xc0012b48f0) (0xc00253a640) Stream removed, broadcasting: 3 I0809 23:42:34.436270 8 log.go:181] (0xc0012b48f0) (0xc00253a6e0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... Aug 9 23:42:34.436: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-6774 PodName:dns-6774 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 9 23:42:34.436: INFO: >>> kubeConfig: /root/.kube/config I0809 23:42:34.473359 8 log.go:181] (0xc000e1ed10) (0xc00253aa00) Create stream I0809 23:42:34.473391 8 log.go:181] (0xc000e1ed10) (0xc00253aa00) Stream added, broadcasting: 1 I0809 23:42:34.475907 8 log.go:181] (0xc000e1ed10) Reply frame received for 1 I0809 23:42:34.475958 8 log.go:181] (0xc000e1ed10) (0xc0025ade00) Create stream I0809 23:42:34.475969 8 log.go:181] (0xc000e1ed10) (0xc0025ade00) Stream added, broadcasting: 3 I0809 23:42:34.477288 8 log.go:181] (0xc000e1ed10) Reply frame received for 3 I0809 23:42:34.477327 8 log.go:181] (0xc000e1ed10) (0xc0025adea0) Create stream I0809 23:42:34.477342 8 log.go:181] (0xc000e1ed10) (0xc0025adea0) Stream added, broadcasting: 5 I0809 23:42:34.478370 8 log.go:181] (0xc000e1ed10) Reply frame received for 5 I0809 23:42:34.572671 8 log.go:181] (0xc000e1ed10) Data frame received for 3 I0809 23:42:34.572708 8 log.go:181] (0xc0025ade00) (3) Data frame handling I0809 23:42:34.572846 8 log.go:181] (0xc0025ade00) (3) Data frame sent I0809 23:42:34.576366 8 log.go:181] (0xc000e1ed10) Data frame received for 5 I0809 23:42:34.576384 8 log.go:181] (0xc0025adea0) (5) Data frame handling I0809 23:42:34.576711 8 log.go:181] (0xc000e1ed10) Data frame received for 3 I0809 23:42:34.576796 8 log.go:181] (0xc0025ade00) (3) Data frame handling I0809 23:42:34.578581 8 log.go:181] (0xc000e1ed10) Data frame received for 1 I0809 23:42:34.578640 8 log.go:181] (0xc00253aa00) (1) Data 
frame handling I0809 23:42:34.578664 8 log.go:181] (0xc00253aa00) (1) Data frame sent I0809 23:42:34.578683 8 log.go:181] (0xc000e1ed10) (0xc00253aa00) Stream removed, broadcasting: 1 I0809 23:42:34.578780 8 log.go:181] (0xc000e1ed10) Go away received I0809 23:42:34.578829 8 log.go:181] (0xc000e1ed10) (0xc00253aa00) Stream removed, broadcasting: 1 I0809 23:42:34.578922 8 log.go:181] (0xc000e1ed10) (0xc0025ade00) Stream removed, broadcasting: 3 I0809 23:42:34.578970 8 log.go:181] (0xc000e1ed10) (0xc0025adea0) Stream removed, broadcasting: 5 Aug 9 23:42:34.579: INFO: Deleting pod dns-6774... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:42:34.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6774" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":303,"completed":66,"skipped":1069,"failed":0} S ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:42:34.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:42:39.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7693" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":67,"skipped":1070,"failed":0} SSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:42:39.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 9 23:42:39.234: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-6390 I0809 23:42:39.261305 8 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-6390, replica count: 1 I0809 23:42:40.311779 8 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0809 23:42:41.312028 8 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0809 23:42:42.312331 8 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 9 23:42:42.478: INFO: Created: latency-svc-r6psn Aug 9 23:42:42.501: INFO: Got endpoints: latency-svc-r6psn [88.49661ms] Aug 9 23:42:42.541: INFO: Created: latency-svc-kdfpk Aug 9 23:42:42.558: INFO: Got endpoints: latency-svc-kdfpk [57.577755ms] Aug 9 23:42:42.574: INFO: Created: latency-svc-f74nz Aug 9 23:42:42.610: INFO: Got endpoints: latency-svc-f74nz [109.14863ms] Aug 9 23:42:42.616: INFO: Created: latency-svc-7c2gk Aug 9 23:42:42.628: INFO: Got endpoints: latency-svc-7c2gk [126.844937ms] Aug 9 23:42:42.652: INFO: Created: latency-svc-ct48t Aug 9 23:42:42.665: INFO: Got endpoints: latency-svc-ct48t [163.261086ms] Aug 9 23:42:42.684: INFO: Created: latency-svc-hbnb2 Aug 9 23:42:42.745: INFO: Got endpoints: latency-svc-hbnb2 [243.45207ms] Aug 9 23:42:42.781: INFO: Created: latency-svc-xlj4r Aug 9 23:42:42.792: INFO: Got endpoints: latency-svc-xlj4r [289.968458ms] Aug 9 23:42:42.826: INFO: Created: latency-svc-svk64 Aug 9 23:42:42.874: INFO: Got endpoints: latency-svc-svk64 [372.500412ms] Aug 9 23:42:42.885: INFO: Created: latency-svc-z2pc9 Aug 9 23:42:42.923: INFO: Got endpoints: latency-svc-z2pc9 [421.126859ms] Aug 9 23:42:43.018: INFO: Created: latency-svc-tsjzz Aug 9 23:42:43.055: INFO: Got endpoints: latency-svc-tsjzz [553.867154ms] Aug 9 23:42:43.055: INFO: Created: latency-svc-v95cf Aug 9 23:42:43.075: INFO: Got endpoints: latency-svc-v95cf [572.281268ms] Aug 9 23:42:43.096: INFO: Created: latency-svc-j2fdh Aug 9 23:42:43.111: INFO: Got endpoints: latency-svc-j2fdh [608.363007ms] Aug 9 23:42:43.155: INFO: Created: latency-svc-ssfzd Aug 9 23:42:43.159: INFO: Got endpoints: latency-svc-ssfzd [656.021384ms] Aug 9 23:42:43.211: INFO: Created: latency-svc-b9s5j Aug 9 23:42:43.219: INFO: Got endpoints: latency-svc-b9s5j [716.633958ms] Aug 9 23:42:43.239: INFO: Created: latency-svc-q8h49 Aug 9 23:42:43.305: INFO: Got endpoints: latency-svc-q8h49 [804.186335ms] Aug 9 23:42:43.309: INFO: Created: latency-svc-rf9pg Aug 9 23:42:43.316: INFO: Got endpoints: 
latency-svc-rf9pg [813.118986ms] Aug 9 23:42:43.361: INFO: Created: latency-svc-q8mhb Aug 9 23:42:43.383: INFO: Got endpoints: latency-svc-q8mhb [825.079012ms] Aug 9 23:42:43.453: INFO: Created: latency-svc-kxg8p Aug 9 23:42:43.466: INFO: Got endpoints: latency-svc-kxg8p [856.310294ms] Aug 9 23:42:43.518: INFO: Created: latency-svc-dkhq7 Aug 9 23:42:43.574: INFO: Got endpoints: latency-svc-dkhq7 [945.814648ms] Aug 9 23:42:43.587: INFO: Created: latency-svc-tbpfg Aug 9 23:42:43.601: INFO: Got endpoints: latency-svc-tbpfg [935.352729ms] Aug 9 23:42:43.657: INFO: Created: latency-svc-fgctq Aug 9 23:42:43.673: INFO: Got endpoints: latency-svc-fgctq [927.549256ms] Aug 9 23:42:43.743: INFO: Created: latency-svc-fhmhm Aug 9 23:42:43.751: INFO: Got endpoints: latency-svc-fhmhm [958.039224ms] Aug 9 23:42:43.768: INFO: Created: latency-svc-n9g8f Aug 9 23:42:43.781: INFO: Got endpoints: latency-svc-n9g8f [907.494477ms] Aug 9 23:42:43.798: INFO: Created: latency-svc-c2vjg Aug 9 23:42:43.811: INFO: Got endpoints: latency-svc-c2vjg [888.360404ms] Aug 9 23:42:43.831: INFO: Created: latency-svc-ssjgc Aug 9 23:42:43.910: INFO: Got endpoints: latency-svc-ssjgc [855.126958ms] Aug 9 23:42:43.913: INFO: Created: latency-svc-n229q Aug 9 23:42:43.919: INFO: Got endpoints: latency-svc-n229q [844.457229ms] Aug 9 23:42:43.945: INFO: Created: latency-svc-wqbzq Aug 9 23:42:43.973: INFO: Got endpoints: latency-svc-wqbzq [861.745867ms] Aug 9 23:42:43.996: INFO: Created: latency-svc-58xl8 Aug 9 23:42:44.072: INFO: Got endpoints: latency-svc-58xl8 [913.64109ms] Aug 9 23:42:44.075: INFO: Created: latency-svc-6hwzp Aug 9 23:42:44.097: INFO: Got endpoints: latency-svc-6hwzp [877.453537ms] Aug 9 23:42:44.125: INFO: Created: latency-svc-4dw77 Aug 9 23:42:44.137: INFO: Got endpoints: latency-svc-4dw77 [831.512528ms] Aug 9 23:42:44.154: INFO: Created: latency-svc-t8545 Aug 9 23:42:44.168: INFO: Got endpoints: latency-svc-t8545 [851.874465ms] Aug 9 23:42:44.221: INFO: Created: latency-svc-xdlpn Aug 9 
23:42:44.228: INFO: Got endpoints: latency-svc-xdlpn [844.134327ms] Aug 9 23:42:44.254: INFO: Created: latency-svc-jlcqk Aug 9 23:42:44.264: INFO: Got endpoints: latency-svc-jlcqk [797.260208ms] Aug 9 23:42:44.284: INFO: Created: latency-svc-pwql7 Aug 9 23:42:44.294: INFO: Got endpoints: latency-svc-pwql7 [719.350759ms] Aug 9 23:42:44.378: INFO: Created: latency-svc-l986g Aug 9 23:42:44.403: INFO: Got endpoints: latency-svc-l986g [802.088961ms] Aug 9 23:42:44.437: INFO: Created: latency-svc-vdjps Aug 9 23:42:44.457: INFO: Got endpoints: latency-svc-vdjps [783.905992ms] Aug 9 23:42:44.525: INFO: Created: latency-svc-gttlq Aug 9 23:42:44.547: INFO: Got endpoints: latency-svc-gttlq [796.76966ms] Aug 9 23:42:44.572: INFO: Created: latency-svc-rztjh Aug 9 23:42:44.583: INFO: Got endpoints: latency-svc-rztjh [801.681451ms] Aug 9 23:42:44.610: INFO: Created: latency-svc-m5vg9 Aug 9 23:42:44.664: INFO: Got endpoints: latency-svc-m5vg9 [853.060182ms] Aug 9 23:42:44.677: INFO: Created: latency-svc-9f8dw Aug 9 23:42:44.692: INFO: Got endpoints: latency-svc-9f8dw [782.279326ms] Aug 9 23:42:44.715: INFO: Created: latency-svc-bfznx Aug 9 23:42:44.728: INFO: Got endpoints: latency-svc-bfznx [808.518365ms] Aug 9 23:42:44.746: INFO: Created: latency-svc-8l9bt Aug 9 23:42:44.821: INFO: Got endpoints: latency-svc-8l9bt [848.634239ms] Aug 9 23:42:44.883: INFO: Created: latency-svc-hqw2n Aug 9 23:42:44.896: INFO: Got endpoints: latency-svc-hqw2n [824.028032ms] Aug 9 23:42:45.013: INFO: Created: latency-svc-jzwv4 Aug 9 23:42:45.065: INFO: Got endpoints: latency-svc-jzwv4 [968.427489ms] Aug 9 23:42:45.123: INFO: Created: latency-svc-htf9m Aug 9 23:42:45.143: INFO: Got endpoints: latency-svc-htf9m [1.005948645s] Aug 9 23:42:45.165: INFO: Created: latency-svc-xq9cr Aug 9 23:42:45.198: INFO: Got endpoints: latency-svc-xq9cr [1.03042161s] Aug 9 23:42:45.252: INFO: Created: latency-svc-b8kff Aug 9 23:42:45.263: INFO: Got endpoints: latency-svc-b8kff [1.035416948s] Aug 9 23:42:45.291: INFO: 
Created: latency-svc-2jx9q Aug 9 23:42:45.306: INFO: Got endpoints: latency-svc-2jx9q [1.042294275s] Aug 9 23:42:45.348: INFO: Created: latency-svc-n5mw7 Aug 9 23:42:45.432: INFO: Got endpoints: latency-svc-n5mw7 [1.137970984s] Aug 9 23:42:45.459: INFO: Created: latency-svc-gs7hw Aug 9 23:42:45.474: INFO: Got endpoints: latency-svc-gs7hw [1.070730563s] Aug 9 23:42:45.507: INFO: Created: latency-svc-l6kfw Aug 9 23:42:45.517: INFO: Got endpoints: latency-svc-l6kfw [1.060599679s] Aug 9 23:42:45.601: INFO: Created: latency-svc-b8t4q Aug 9 23:42:45.605: INFO: Got endpoints: latency-svc-b8t4q [1.058016612s] Aug 9 23:42:45.677: INFO: Created: latency-svc-spsml Aug 9 23:42:45.689: INFO: Got endpoints: latency-svc-spsml [1.106271647s] Aug 9 23:42:45.787: INFO: Created: latency-svc-4fwlr Aug 9 23:42:45.792: INFO: Got endpoints: latency-svc-4fwlr [1.127135289s] Aug 9 23:42:45.809: INFO: Created: latency-svc-f5658 Aug 9 23:42:45.823: INFO: Got endpoints: latency-svc-f5658 [1.130503451s] Aug 9 23:42:45.841: INFO: Created: latency-svc-rp2hq Aug 9 23:42:45.852: INFO: Got endpoints: latency-svc-rp2hq [1.124363718s] Aug 9 23:42:45.870: INFO: Created: latency-svc-qjcqf Aug 9 23:42:45.883: INFO: Got endpoints: latency-svc-qjcqf [1.061173606s] Aug 9 23:42:45.934: INFO: Created: latency-svc-4g9wx Aug 9 23:42:45.963: INFO: Got endpoints: latency-svc-4g9wx [1.066687869s] Aug 9 23:42:45.965: INFO: Created: latency-svc-8hqqr Aug 9 23:42:45.980: INFO: Got endpoints: latency-svc-8hqqr [914.321188ms] Aug 9 23:42:46.002: INFO: Created: latency-svc-v2ztt Aug 9 23:42:46.022: INFO: Got endpoints: latency-svc-v2ztt [879.289975ms] Aug 9 23:42:46.078: INFO: Created: latency-svc-gcwxh Aug 9 23:42:46.078: INFO: Got endpoints: latency-svc-gcwxh [880.270756ms] Aug 9 23:42:46.149: INFO: Created: latency-svc-f54bv Aug 9 23:42:46.166: INFO: Got endpoints: latency-svc-f54bv [903.292694ms] Aug 9 23:42:46.209: INFO: Created: latency-svc-qmcbn Aug 9 23:42:46.213: INFO: Got endpoints: latency-svc-qmcbn 
[907.080436ms] Aug 9 23:42:46.267: INFO: Created: latency-svc-gwvbt Aug 9 23:42:46.281: INFO: Got endpoints: latency-svc-gwvbt [849.025234ms] Aug 9 23:42:46.308: INFO: Created: latency-svc-lghsh Aug 9 23:42:46.390: INFO: Got endpoints: latency-svc-lghsh [915.859059ms] Aug 9 23:42:46.393: INFO: Created: latency-svc-cqmzj Aug 9 23:42:46.407: INFO: Got endpoints: latency-svc-cqmzj [889.017148ms] Aug 9 23:42:46.452: INFO: Created: latency-svc-kh954 Aug 9 23:42:46.467: INFO: Got endpoints: latency-svc-kh954 [861.73027ms] Aug 9 23:42:46.531: INFO: Created: latency-svc-9lp5g Aug 9 23:42:46.551: INFO: Got endpoints: latency-svc-9lp5g [861.393672ms] Aug 9 23:42:46.569: INFO: Created: latency-svc-2dxz6 Aug 9 23:42:46.582: INFO: Got endpoints: latency-svc-2dxz6 [790.181136ms] Aug 9 23:42:46.599: INFO: Created: latency-svc-czgc9 Aug 9 23:42:46.677: INFO: Got endpoints: latency-svc-czgc9 [854.125066ms] Aug 9 23:42:46.703: INFO: Created: latency-svc-hzkrh Aug 9 23:42:46.720: INFO: Got endpoints: latency-svc-hzkrh [867.94371ms] Aug 9 23:42:46.746: INFO: Created: latency-svc-brhqq Aug 9 23:42:46.844: INFO: Got endpoints: latency-svc-brhqq [960.675905ms] Aug 9 23:42:46.847: INFO: Created: latency-svc-xxttw Aug 9 23:42:46.858: INFO: Got endpoints: latency-svc-xxttw [894.836006ms] Aug 9 23:42:46.875: INFO: Created: latency-svc-lqfhv Aug 9 23:42:46.908: INFO: Got endpoints: latency-svc-lqfhv [928.045806ms] Aug 9 23:42:47.013: INFO: Created: latency-svc-6lmct Aug 9 23:42:47.021: INFO: Got endpoints: latency-svc-6lmct [999.227296ms] Aug 9 23:42:47.043: INFO: Created: latency-svc-5rzgb Aug 9 23:42:47.079: INFO: Got endpoints: latency-svc-5rzgb [1.000435218s] Aug 9 23:42:47.169: INFO: Created: latency-svc-rmh4f Aug 9 23:42:47.172: INFO: Got endpoints: latency-svc-rmh4f [1.00526706s] Aug 9 23:42:47.195: INFO: Created: latency-svc-p8gmk Aug 9 23:42:47.208: INFO: Got endpoints: latency-svc-p8gmk [994.861845ms] Aug 9 23:42:47.226: INFO: Created: latency-svc-tjnwz Aug 9 23:42:47.238: INFO: Got 
endpoints: latency-svc-tjnwz [957.1277ms] Aug 9 23:42:47.257: INFO: Created: latency-svc-c7xl6 Aug 9 23:42:47.305: INFO: Got endpoints: latency-svc-c7xl6 [915.828497ms] Aug 9 23:42:47.318: INFO: Created: latency-svc-wmz8j Aug 9 23:42:47.353: INFO: Got endpoints: latency-svc-wmz8j [946.351937ms] Aug 9 23:42:47.379: INFO: Created: latency-svc-6pjbq Aug 9 23:42:47.396: INFO: Got endpoints: latency-svc-6pjbq [928.707721ms] Aug 9 23:42:47.497: INFO: Created: latency-svc-zdgzk Aug 9 23:42:47.501: INFO: Got endpoints: latency-svc-zdgzk [949.83955ms] Aug 9 23:42:47.670: INFO: Created: latency-svc-cqm44 Aug 9 23:42:47.678: INFO: Got endpoints: latency-svc-cqm44 [1.096400617s] Aug 9 23:42:47.708: INFO: Created: latency-svc-6lgcc Aug 9 23:42:47.734: INFO: Got endpoints: latency-svc-6lgcc [1.056792087s] Aug 9 23:42:47.751: INFO: Created: latency-svc-gfnz8 Aug 9 23:42:47.763: INFO: Got endpoints: latency-svc-gfnz8 [1.042092873s] Aug 9 23:42:47.820: INFO: Created: latency-svc-97r6v Aug 9 23:42:47.841: INFO: Got endpoints: latency-svc-97r6v [997.446578ms] Aug 9 23:42:47.880: INFO: Created: latency-svc-7jpjt Aug 9 23:42:47.895: INFO: Got endpoints: latency-svc-7jpjt [1.037074561s] Aug 9 23:42:48.000: INFO: Created: latency-svc-9dx7k Aug 9 23:42:48.004: INFO: Got endpoints: latency-svc-9dx7k [1.096676289s] Aug 9 23:42:48.039: INFO: Created: latency-svc-jkzcz Aug 9 23:42:48.063: INFO: Got endpoints: latency-svc-jkzcz [1.041119231s] Aug 9 23:42:48.090: INFO: Created: latency-svc-8z2vk Aug 9 23:42:48.161: INFO: Got endpoints: latency-svc-8z2vk [1.082393886s] Aug 9 23:42:48.165: INFO: Created: latency-svc-swjxb Aug 9 23:42:48.172: INFO: Got endpoints: latency-svc-swjxb [999.947804ms] Aug 9 23:42:48.191: INFO: Created: latency-svc-qw889 Aug 9 23:42:48.214: INFO: Got endpoints: latency-svc-qw889 [1.006163418s] Aug 9 23:42:48.255: INFO: Created: latency-svc-svsxz Aug 9 23:42:48.317: INFO: Got endpoints: latency-svc-svsxz [1.078776823s] Aug 9 23:42:48.320: INFO: Created: latency-svc-cd6p5 
Aug 9 23:42:48.353: INFO: Got endpoints: latency-svc-cd6p5 [1.047106278s] Aug 9 23:42:48.402: INFO: Created: latency-svc-5vs5c Aug 9 23:42:48.414: INFO: Got endpoints: latency-svc-5vs5c [1.060550092s] Aug 9 23:42:48.466: INFO: Created: latency-svc-6xcg8 Aug 9 23:42:48.470: INFO: Got endpoints: latency-svc-6xcg8 [1.073977746s] Aug 9 23:42:48.507: INFO: Created: latency-svc-ck6lz Aug 9 23:42:48.522: INFO: Got endpoints: latency-svc-ck6lz [1.020996972s] Aug 9 23:42:48.557: INFO: Created: latency-svc-qxvqt Aug 9 23:42:48.622: INFO: Got endpoints: latency-svc-qxvqt [943.936161ms] Aug 9 23:42:48.629: INFO: Created: latency-svc-bfkcv Aug 9 23:42:48.656: INFO: Got endpoints: latency-svc-bfkcv [922.638267ms] Aug 9 23:42:48.683: INFO: Created: latency-svc-9zxjx Aug 9 23:42:48.691: INFO: Got endpoints: latency-svc-9zxjx [927.981132ms] Aug 9 23:42:48.773: INFO: Created: latency-svc-qrlxb Aug 9 23:42:48.787: INFO: Got endpoints: latency-svc-qrlxb [945.517911ms] Aug 9 23:42:48.845: INFO: Created: latency-svc-k8zh9 Aug 9 23:42:48.869: INFO: Got endpoints: latency-svc-k8zh9 [974.06493ms] Aug 9 23:42:48.925: INFO: Created: latency-svc-h4lmj Aug 9 23:42:48.931: INFO: Got endpoints: latency-svc-h4lmj [926.537967ms] Aug 9 23:42:48.968: INFO: Created: latency-svc-477p7 Aug 9 23:42:48.980: INFO: Got endpoints: latency-svc-477p7 [916.845381ms] Aug 9 23:42:49.007: INFO: Created: latency-svc-42zgl Aug 9 23:42:49.096: INFO: Got endpoints: latency-svc-42zgl [934.187166ms] Aug 9 23:42:49.098: INFO: Created: latency-svc-rk96r Aug 9 23:42:49.106: INFO: Got endpoints: latency-svc-rk96r [934.631441ms] Aug 9 23:42:49.167: INFO: Created: latency-svc-jpvdh Aug 9 23:42:49.239: INFO: Got endpoints: latency-svc-jpvdh [1.024883998s] Aug 9 23:42:49.250: INFO: Created: latency-svc-ztqtx Aug 9 23:42:49.262: INFO: Got endpoints: latency-svc-ztqtx [944.852162ms] Aug 9 23:42:49.283: INFO: Created: latency-svc-pkbcs Aug 9 23:42:49.304: INFO: Got endpoints: latency-svc-pkbcs [951.777522ms] Aug 9 23:42:49.407: 
INFO: Created: latency-svc-6zcsx Aug 9 23:42:49.411: INFO: Got endpoints: latency-svc-6zcsx [997.824095ms] Aug 9 23:42:49.472: INFO: Created: latency-svc-hst8l Aug 9 23:42:49.484: INFO: Got endpoints: latency-svc-hst8l [1.014159731s] Aug 9 23:42:49.502: INFO: Created: latency-svc-mshzg Aug 9 23:42:49.580: INFO: Got endpoints: latency-svc-mshzg [1.05827823s] Aug 9 23:42:49.601: INFO: Created: latency-svc-thhxn Aug 9 23:42:49.611: INFO: Got endpoints: latency-svc-thhxn [988.335103ms] Aug 9 23:42:49.642: INFO: Created: latency-svc-xsc5w Aug 9 23:42:49.678: INFO: Got endpoints: latency-svc-xsc5w [1.02144103s] Aug 9 23:42:49.762: INFO: Created: latency-svc-dt52r Aug 9 23:42:49.786: INFO: Got endpoints: latency-svc-dt52r [1.095641802s] Aug 9 23:42:49.817: INFO: Created: latency-svc-bw7qp Aug 9 23:42:49.828: INFO: Got endpoints: latency-svc-bw7qp [1.041168894s] Aug 9 23:42:49.889: INFO: Created: latency-svc-n5lxq Aug 9 23:42:49.901: INFO: Got endpoints: latency-svc-n5lxq [1.031389829s] Aug 9 23:42:49.946: INFO: Created: latency-svc-jg9tx Aug 9 23:42:50.035: INFO: Got endpoints: latency-svc-jg9tx [1.104221533s] Aug 9 23:42:50.051: INFO: Created: latency-svc-7n62w Aug 9 23:42:50.063: INFO: Got endpoints: latency-svc-7n62w [1.083273484s] Aug 9 23:42:50.081: INFO: Created: latency-svc-qvtkm Aug 9 23:42:50.093: INFO: Got endpoints: latency-svc-qvtkm [997.376044ms] Aug 9 23:42:50.167: INFO: Created: latency-svc-n8bgp Aug 9 23:42:50.172: INFO: Got endpoints: latency-svc-n8bgp [1.065657839s] Aug 9 23:42:50.198: INFO: Created: latency-svc-sqjlj Aug 9 23:42:50.214: INFO: Got endpoints: latency-svc-sqjlj [974.686506ms] Aug 9 23:42:50.234: INFO: Created: latency-svc-9vfms Aug 9 23:42:50.250: INFO: Got endpoints: latency-svc-9vfms [988.340036ms] Aug 9 23:42:50.324: INFO: Created: latency-svc-wqhgb Aug 9 23:42:50.328: INFO: Got endpoints: latency-svc-wqhgb [1.023629639s] Aug 9 23:42:50.402: INFO: Created: latency-svc-snvrj Aug 9 23:42:50.419: INFO: Got endpoints: latency-svc-snvrj 
[1.007151656s] Aug 9 23:42:50.449: INFO: Created: latency-svc-9fcnr Aug 9 23:42:50.460: INFO: Got endpoints: latency-svc-9fcnr [975.814421ms] Aug 9 23:42:50.486: INFO: Created: latency-svc-jh5kz Aug 9 23:42:50.502: INFO: Got endpoints: latency-svc-jh5kz [922.213733ms] Aug 9 23:42:50.616: INFO: Created: latency-svc-xhmdk Aug 9 23:42:50.619: INFO: Got endpoints: latency-svc-xhmdk [1.008292765s] Aug 9 23:42:50.641: INFO: Created: latency-svc-jzj5x Aug 9 23:42:50.678: INFO: Got endpoints: latency-svc-jzj5x [999.93583ms] Aug 9 23:42:50.701: INFO: Created: latency-svc-n5z4g Aug 9 23:42:50.714: INFO: Got endpoints: latency-svc-n5z4g [927.148429ms] Aug 9 23:42:50.784: INFO: Created: latency-svc-cmcqm Aug 9 23:42:50.805: INFO: Got endpoints: latency-svc-cmcqm [976.648527ms] Aug 9 23:42:50.837: INFO: Created: latency-svc-qscrp Aug 9 23:42:50.852: INFO: Got endpoints: latency-svc-qscrp [951.137327ms] Aug 9 23:42:50.869: INFO: Created: latency-svc-6s4s5 Aug 9 23:42:50.882: INFO: Got endpoints: latency-svc-6s4s5 [846.956102ms] Aug 9 23:42:50.935: INFO: Created: latency-svc-twf97 Aug 9 23:42:50.947: INFO: Got endpoints: latency-svc-twf97 [884.055854ms] Aug 9 23:42:51.004: INFO: Created: latency-svc-lfjv7 Aug 9 23:42:51.028: INFO: Got endpoints: latency-svc-lfjv7 [934.302244ms] Aug 9 23:42:51.140: INFO: Created: latency-svc-bw5bq Aug 9 23:42:51.147: INFO: Got endpoints: latency-svc-bw5bq [975.165936ms] Aug 9 23:42:51.169: INFO: Created: latency-svc-9tq74 Aug 9 23:42:51.177: INFO: Got endpoints: latency-svc-9tq74 [963.384816ms] Aug 9 23:42:51.196: INFO: Created: latency-svc-xwfzc Aug 9 23:42:51.233: INFO: Got endpoints: latency-svc-xwfzc [982.847313ms] Aug 9 23:42:51.270: INFO: Created: latency-svc-zfzd9 Aug 9 23:42:51.293: INFO: Got endpoints: latency-svc-zfzd9 [965.035578ms] Aug 9 23:42:51.313: INFO: Created: latency-svc-wjljj Aug 9 23:42:51.329: INFO: Got endpoints: latency-svc-wjljj [910.020922ms] Aug 9 23:42:51.374: INFO: Created: latency-svc-b2dj9 Aug 9 23:42:51.383: INFO: 
Got endpoints: latency-svc-b2dj9 [922.425263ms] Aug 9 23:42:51.400: INFO: Created: latency-svc-wscws Aug 9 23:42:51.407: INFO: Got endpoints: latency-svc-wscws [904.246029ms] Aug 9 23:42:51.430: INFO: Created: latency-svc-vf9rs Aug 9 23:42:51.462: INFO: Got endpoints: latency-svc-vf9rs [842.539466ms] Aug 9 23:42:51.520: INFO: Created: latency-svc-zsv74 Aug 9 23:42:51.526: INFO: Got endpoints: latency-svc-zsv74 [848.001252ms] Aug 9 23:42:51.559: INFO: Created: latency-svc-xmtdg Aug 9 23:42:51.590: INFO: Got endpoints: latency-svc-xmtdg [876.050656ms] Aug 9 23:42:51.689: INFO: Created: latency-svc-qr69g Aug 9 23:42:51.737: INFO: Got endpoints: latency-svc-qr69g [932.131621ms] Aug 9 23:42:51.737: INFO: Created: latency-svc-chcw5 Aug 9 23:42:51.773: INFO: Got endpoints: latency-svc-chcw5 [920.970611ms] Aug 9 23:42:51.826: INFO: Created: latency-svc-2zzlv Aug 9 23:42:51.830: INFO: Got endpoints: latency-svc-2zzlv [947.599911ms] Aug 9 23:42:51.884: INFO: Created: latency-svc-sgwvh Aug 9 23:42:51.894: INFO: Got endpoints: latency-svc-sgwvh [947.137172ms] Aug 9 23:42:51.916: INFO: Created: latency-svc-qf24g Aug 9 23:42:51.964: INFO: Got endpoints: latency-svc-qf24g [935.971117ms] Aug 9 23:42:51.976: INFO: Created: latency-svc-9hzp8 Aug 9 23:42:51.992: INFO: Got endpoints: latency-svc-9hzp8 [844.415957ms] Aug 9 23:42:52.013: INFO: Created: latency-svc-rb554 Aug 9 23:42:52.028: INFO: Got endpoints: latency-svc-rb554 [850.633073ms] Aug 9 23:42:52.045: INFO: Created: latency-svc-dg456 Aug 9 23:42:52.058: INFO: Got endpoints: latency-svc-dg456 [824.975008ms] Aug 9 23:42:52.107: INFO: Created: latency-svc-z2597 Aug 9 23:42:52.129: INFO: Got endpoints: latency-svc-z2597 [836.024679ms] Aug 9 23:42:52.130: INFO: Created: latency-svc-qkppb Aug 9 23:42:52.155: INFO: Got endpoints: latency-svc-qkppb [826.505401ms] Aug 9 23:42:52.240: INFO: Created: latency-svc-2dbft Aug 9 23:42:52.244: INFO: Got endpoints: latency-svc-2dbft [861.225056ms] Aug 9 23:42:52.270: INFO: Created: 
latency-svc-dwlhr Aug 9 23:42:52.287: INFO: Got endpoints: latency-svc-dwlhr [880.385434ms] Aug 9 23:42:52.303: INFO: Created: latency-svc-x84mq Aug 9 23:42:52.413: INFO: Got endpoints: latency-svc-x84mq [951.171418ms] Aug 9 23:42:52.419: INFO: Created: latency-svc-r7bq2 Aug 9 23:42:52.426: INFO: Got endpoints: latency-svc-r7bq2 [900.040757ms] Aug 9 23:42:52.444: INFO: Created: latency-svc-c4msh Aug 9 23:42:52.456: INFO: Got endpoints: latency-svc-c4msh [865.973587ms] Aug 9 23:42:52.474: INFO: Created: latency-svc-mjzwn Aug 9 23:42:52.486: INFO: Got endpoints: latency-svc-mjzwn [749.264373ms] Aug 9 23:42:52.571: INFO: Created: latency-svc-q478n Aug 9 23:42:52.588: INFO: Got endpoints: latency-svc-q478n [814.577256ms] Aug 9 23:42:52.630: INFO: Created: latency-svc-4fxr2 Aug 9 23:42:52.648: INFO: Got endpoints: latency-svc-4fxr2 [817.392737ms] Aug 9 23:42:52.725: INFO: Created: latency-svc-79ks2 Aug 9 23:42:52.747: INFO: Got endpoints: latency-svc-79ks2 [852.911162ms] Aug 9 23:42:52.778: INFO: Created: latency-svc-n56cj Aug 9 23:42:52.800: INFO: Got endpoints: latency-svc-n56cj [835.985561ms] Aug 9 23:42:52.875: INFO: Created: latency-svc-hd9tl Aug 9 23:42:52.878: INFO: Got endpoints: latency-svc-hd9tl [886.573275ms] Aug 9 23:42:52.941: INFO: Created: latency-svc-xsc4h Aug 9 23:42:52.944: INFO: Got endpoints: latency-svc-xsc4h [915.87551ms] Aug 9 23:42:53.051: INFO: Created: latency-svc-9zjms Aug 9 23:42:53.054: INFO: Got endpoints: latency-svc-9zjms [995.485205ms] Aug 9 23:42:53.110: INFO: Created: latency-svc-h9cqf Aug 9 23:42:53.130: INFO: Got endpoints: latency-svc-h9cqf [1.000686428s] Aug 9 23:42:53.209: INFO: Created: latency-svc-6rpvq Aug 9 23:42:53.220: INFO: Got endpoints: latency-svc-6rpvq [1.064621762s] Aug 9 23:42:53.251: INFO: Created: latency-svc-hgprf Aug 9 23:42:53.263: INFO: Got endpoints: latency-svc-hgprf [1.018835778s] Aug 9 23:42:53.347: INFO: Created: latency-svc-kjrl6 Aug 9 23:42:53.386: INFO: Got endpoints: latency-svc-kjrl6 [1.099185148s] Aug 
9 23:42:53.387: INFO: Created: latency-svc-wp79c Aug 9 23:42:53.422: INFO: Got endpoints: latency-svc-wp79c [1.008710797s] Aug 9 23:42:53.503: INFO: Created: latency-svc-jpzk8 Aug 9 23:42:53.508: INFO: Got endpoints: latency-svc-jpzk8 [1.081758223s] Aug 9 23:42:53.551: INFO: Created: latency-svc-sss5g Aug 9 23:42:53.569: INFO: Got endpoints: latency-svc-sss5g [1.113600884s] Aug 9 23:42:53.658: INFO: Created: latency-svc-pdnb9 Aug 9 23:42:53.695: INFO: Got endpoints: latency-svc-pdnb9 [1.209237909s] Aug 9 23:42:53.737: INFO: Created: latency-svc-bjldk Aug 9 23:42:53.750: INFO: Got endpoints: latency-svc-bjldk [1.162081175s] Aug 9 23:42:53.810: INFO: Created: latency-svc-lcl97 Aug 9 23:42:53.814: INFO: Got endpoints: latency-svc-lcl97 [1.166766718s] Aug 9 23:42:53.848: INFO: Created: latency-svc-pzwwc Aug 9 23:42:53.872: INFO: Got endpoints: latency-svc-pzwwc [1.12433976s] Aug 9 23:42:53.902: INFO: Created: latency-svc-l8447 Aug 9 23:42:53.982: INFO: Got endpoints: latency-svc-l8447 [1.182586859s] Aug 9 23:42:54.013: INFO: Created: latency-svc-dgw7z Aug 9 23:42:54.026: INFO: Got endpoints: latency-svc-dgw7z [1.147627122s] Aug 9 23:42:54.049: INFO: Created: latency-svc-pxw6r Aug 9 23:42:54.076: INFO: Got endpoints: latency-svc-pxw6r [1.131465698s] Aug 9 23:42:54.154: INFO: Created: latency-svc-gkzjb Aug 9 23:42:54.165: INFO: Got endpoints: latency-svc-gkzjb [1.111216921s] Aug 9 23:42:54.183: INFO: Created: latency-svc-hps89 Aug 9 23:42:54.195: INFO: Got endpoints: latency-svc-hps89 [1.065105068s] Aug 9 23:42:54.216: INFO: Created: latency-svc-44l5h Aug 9 23:42:54.264: INFO: Got endpoints: latency-svc-44l5h [1.043471362s] Aug 9 23:42:54.300: INFO: Created: latency-svc-gptpt Aug 9 23:42:54.310: INFO: Got endpoints: latency-svc-gptpt [1.046970974s] Aug 9 23:42:54.444: INFO: Created: latency-svc-7x7w6 Aug 9 23:42:54.481: INFO: Got endpoints: latency-svc-7x7w6 [1.094152012s] Aug 9 23:42:54.481: INFO: Created: latency-svc-d8hww Aug 9 23:42:54.505: INFO: Got endpoints: 
latency-svc-d8hww [1.082957526s] Aug 9 23:42:54.585: INFO: Created: latency-svc-wbmvs Aug 9 23:42:54.609: INFO: Got endpoints: latency-svc-wbmvs [1.101347816s] Aug 9 23:42:54.724: INFO: Created: latency-svc-cvgsk Aug 9 23:42:54.728: INFO: Got endpoints: latency-svc-cvgsk [1.158697631s] Aug 9 23:42:54.789: INFO: Created: latency-svc-zvclv Aug 9 23:42:54.802: INFO: Got endpoints: latency-svc-zvclv [1.106840319s] Aug 9 23:42:54.869: INFO: Created: latency-svc-zjbff Aug 9 23:42:54.925: INFO: Got endpoints: latency-svc-zjbff [1.174773962s] Aug 9 23:42:54.925: INFO: Created: latency-svc-nb2r9 Aug 9 23:42:54.941: INFO: Got endpoints: latency-svc-nb2r9 [1.126688269s] Aug 9 23:42:54.966: INFO: Created: latency-svc-hrxx2 Aug 9 23:42:55.029: INFO: Got endpoints: latency-svc-hrxx2 [1.157566633s] Aug 9 23:42:55.041: INFO: Created: latency-svc-gfdgs Aug 9 23:42:55.069: INFO: Got endpoints: latency-svc-gfdgs [1.086235539s] Aug 9 23:42:55.102: INFO: Created: latency-svc-s9xxs Aug 9 23:42:55.117: INFO: Got endpoints: latency-svc-s9xxs [1.090580151s] Aug 9 23:42:55.206: INFO: Created: latency-svc-jksrm Aug 9 23:42:55.218: INFO: Got endpoints: latency-svc-jksrm [1.141944551s] Aug 9 23:42:55.236: INFO: Created: latency-svc-m827v Aug 9 23:42:55.248: INFO: Got endpoints: latency-svc-m827v [1.08338417s] Aug 9 23:42:55.267: INFO: Created: latency-svc-w9xnx Aug 9 23:42:55.305: INFO: Got endpoints: latency-svc-w9xnx [1.109707814s] Aug 9 23:42:55.335: INFO: Created: latency-svc-9vlqf Aug 9 23:42:55.351: INFO: Got endpoints: latency-svc-9vlqf [1.086902177s] Aug 9 23:42:55.351: INFO: Latencies: [57.577755ms 109.14863ms 126.844937ms 163.261086ms 243.45207ms 289.968458ms 372.500412ms 421.126859ms 553.867154ms 572.281268ms 608.363007ms 656.021384ms 716.633958ms 719.350759ms 749.264373ms 782.279326ms 783.905992ms 790.181136ms 796.76966ms 797.260208ms 801.681451ms 802.088961ms 804.186335ms 808.518365ms 813.118986ms 814.577256ms 817.392737ms 824.028032ms 824.975008ms 825.079012ms 826.505401ms 
831.512528ms 835.985561ms 836.024679ms 842.539466ms 844.134327ms 844.415957ms 844.457229ms 846.956102ms 848.001252ms 848.634239ms 849.025234ms 850.633073ms 851.874465ms 852.911162ms 853.060182ms 854.125066ms 855.126958ms 856.310294ms 861.225056ms 861.393672ms 861.73027ms 861.745867ms 865.973587ms 867.94371ms 876.050656ms 877.453537ms 879.289975ms 880.270756ms 880.385434ms 884.055854ms 886.573275ms 888.360404ms 889.017148ms 894.836006ms 900.040757ms 903.292694ms 904.246029ms 907.080436ms 907.494477ms 910.020922ms 913.64109ms 914.321188ms 915.828497ms 915.859059ms 915.87551ms 916.845381ms 920.970611ms 922.213733ms 922.425263ms 922.638267ms 926.537967ms 927.148429ms 927.549256ms 927.981132ms 928.045806ms 928.707721ms 932.131621ms 934.187166ms 934.302244ms 934.631441ms 935.352729ms 935.971117ms 943.936161ms 944.852162ms 945.517911ms 945.814648ms 946.351937ms 947.137172ms 947.599911ms 949.83955ms 951.137327ms 951.171418ms 951.777522ms 957.1277ms 958.039224ms 960.675905ms 963.384816ms 965.035578ms 968.427489ms 974.06493ms 974.686506ms 975.165936ms 975.814421ms 976.648527ms 982.847313ms 988.335103ms 988.340036ms 994.861845ms 995.485205ms 997.376044ms 997.446578ms 997.824095ms 999.227296ms 999.93583ms 999.947804ms 1.000435218s 1.000686428s 1.00526706s 1.005948645s 1.006163418s 1.007151656s 1.008292765s 1.008710797s 1.014159731s 1.018835778s 1.020996972s 1.02144103s 1.023629639s 1.024883998s 1.03042161s 1.031389829s 1.035416948s 1.037074561s 1.041119231s 1.041168894s 1.042092873s 1.042294275s 1.043471362s 1.046970974s 1.047106278s 1.056792087s 1.058016612s 1.05827823s 1.060550092s 1.060599679s 1.061173606s 1.064621762s 1.065105068s 1.065657839s 1.066687869s 1.070730563s 1.073977746s 1.078776823s 1.081758223s 1.082393886s 1.082957526s 1.083273484s 1.08338417s 1.086235539s 1.086902177s 1.090580151s 1.094152012s 1.095641802s 1.096400617s 1.096676289s 1.099185148s 1.101347816s 1.104221533s 1.106271647s 1.106840319s 1.109707814s 1.111216921s 1.113600884s 1.12433976s 1.124363718s 
1.126688269s 1.127135289s 1.130503451s 1.131465698s 1.137970984s 1.141944551s 1.147627122s 1.157566633s 1.158697631s 1.162081175s 1.166766718s 1.174773962s 1.182586859s 1.209237909s] Aug 9 23:42:55.351: INFO: 50 %ile: 949.83955ms Aug 9 23:42:55.351: INFO: 90 %ile: 1.106840319s Aug 9 23:42:55.351: INFO: 99 %ile: 1.182586859s Aug 9 23:42:55.351: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:42:55.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-6390" for this suite. • [SLOW TEST:16.313 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":303,"completed":68,"skipped":1076,"failed":0} [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:42:55.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource 
version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:43:00.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6376" for this suite. • [SLOW TEST:5.226 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":303,"completed":69,"skipped":1076,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:43:00.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium Aug 9 23:43:01.945: INFO: Waiting up to 5m0s for pod "pod-78a5ce4d-a6c0-49db-bb4e-333177b43b7c" in namespace "emptydir-4974" to be "Succeeded or Failed" Aug 9 23:43:01.991: INFO: Pod "pod-78a5ce4d-a6c0-49db-bb4e-333177b43b7c": 
Phase="Pending", Reason="", readiness=false. Elapsed: 45.279119ms Aug 9 23:43:04.216: INFO: Pod "pod-78a5ce4d-a6c0-49db-bb4e-333177b43b7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.270373865s Aug 9 23:43:06.379: INFO: Pod "pod-78a5ce4d-a6c0-49db-bb4e-333177b43b7c": Phase="Running", Reason="", readiness=true. Elapsed: 4.433313083s Aug 9 23:43:08.515: INFO: Pod "pod-78a5ce4d-a6c0-49db-bb4e-333177b43b7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.569925759s STEP: Saw pod success Aug 9 23:43:08.515: INFO: Pod "pod-78a5ce4d-a6c0-49db-bb4e-333177b43b7c" satisfied condition "Succeeded or Failed" Aug 9 23:43:08.550: INFO: Trying to get logs from node latest-worker2 pod pod-78a5ce4d-a6c0-49db-bb4e-333177b43b7c container test-container: STEP: delete the pod Aug 9 23:43:08.971: INFO: Waiting for pod pod-78a5ce4d-a6c0-49db-bb4e-333177b43b7c to disappear Aug 9 23:43:08.994: INFO: Pod pod-78a5ce4d-a6c0-49db-bb4e-333177b43b7c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:43:08.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4974" for this suite. 
• [SLOW TEST:8.356 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":70,"skipped":1077,"failed":0} SSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:43:09.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Aug 9 23:43:09.161: INFO: Waiting up to 5m0s for pod "downward-api-3f82ec6a-0bba-4a30-ae4f-13aac9385ba7" in namespace "downward-api-4234" to be "Succeeded or Failed" Aug 9 23:43:09.205: INFO: Pod "downward-api-3f82ec6a-0bba-4a30-ae4f-13aac9385ba7": Phase="Pending", Reason="", readiness=false. Elapsed: 43.993148ms Aug 9 23:43:11.317: INFO: Pod "downward-api-3f82ec6a-0bba-4a30-ae4f-13aac9385ba7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.156075898s Aug 9 23:43:13.363: INFO: Pod "downward-api-3f82ec6a-0bba-4a30-ae4f-13aac9385ba7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.201682715s STEP: Saw pod success Aug 9 23:43:13.363: INFO: Pod "downward-api-3f82ec6a-0bba-4a30-ae4f-13aac9385ba7" satisfied condition "Succeeded or Failed" Aug 9 23:43:13.373: INFO: Trying to get logs from node latest-worker2 pod downward-api-3f82ec6a-0bba-4a30-ae4f-13aac9385ba7 container dapi-container: STEP: delete the pod Aug 9 23:43:13.966: INFO: Waiting for pod downward-api-3f82ec6a-0bba-4a30-ae4f-13aac9385ba7 to disappear Aug 9 23:43:13.995: INFO: Pod downward-api-3f82ec6a-0bba-4a30-ae4f-13aac9385ba7 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:43:13.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4234" for this suite. • [SLOW TEST:5.071 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":303,"completed":71,"skipped":1086,"failed":0} SSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:43:14.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 9 23:43:19.981: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:43:20.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3976" for this suite. 
• [SLOW TEST:6.143 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":72,"skipped":1093,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:43:20.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating server pod server in namespace prestop-3270 STEP: Waiting for pods to come up. 
STEP: Creating tester pod tester in namespace prestop-3270 STEP: Deleting pre-stop pod Aug 9 23:43:33.528: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:43:33.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-3270" for this suite. • [SLOW TEST:13.414 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":303,"completed":73,"skipped":1104,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:43:33.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default 
service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-9a30a18e-82ed-4267-a461-a9d7dcd411eb STEP: Creating a pod to test consume secrets Aug 9 23:43:34.116: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a8fb2f3b-bb05-469d-aa94-c495383e65b2" in namespace "projected-5724" to be "Succeeded or Failed" Aug 9 23:43:34.216: INFO: Pod "pod-projected-secrets-a8fb2f3b-bb05-469d-aa94-c495383e65b2": Phase="Pending", Reason="", readiness=false. Elapsed: 99.794164ms Aug 9 23:43:36.290: INFO: Pod "pod-projected-secrets-a8fb2f3b-bb05-469d-aa94-c495383e65b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.173318966s Aug 9 23:43:38.330: INFO: Pod "pod-projected-secrets-a8fb2f3b-bb05-469d-aa94-c495383e65b2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.21405391s Aug 9 23:43:40.334: INFO: Pod "pod-projected-secrets-a8fb2f3b-bb05-469d-aa94-c495383e65b2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.218065812s STEP: Saw pod success Aug 9 23:43:40.334: INFO: Pod "pod-projected-secrets-a8fb2f3b-bb05-469d-aa94-c495383e65b2" satisfied condition "Succeeded or Failed" Aug 9 23:43:40.337: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-a8fb2f3b-bb05-469d-aa94-c495383e65b2 container projected-secret-volume-test: STEP: delete the pod Aug 9 23:43:40.371: INFO: Waiting for pod pod-projected-secrets-a8fb2f3b-bb05-469d-aa94-c495383e65b2 to disappear Aug 9 23:43:40.425: INFO: Pod pod-projected-secrets-a8fb2f3b-bb05-469d-aa94-c495383e65b2 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:43:40.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5724" for this suite. • [SLOW TEST:6.765 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":74,"skipped":1121,"failed":0} SSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:43:40.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-6233 STEP: creating replication controller nodeport-test in namespace services-6233 I0809 23:43:40.598155 8 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-6233, replica count: 2 I0809 23:43:43.648545 8 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0809 23:43:46.648877 8 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 9 23:43:46.648: INFO: Creating new exec pod Aug 9 23:43:51.689: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-6233 execpod5mrkn -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Aug 9 23:43:51.895: INFO: stderr: "I0809 23:43:51.824164 231 log.go:181] (0xc0001c8000) (0xc000c01180) Create stream\nI0809 23:43:51.824209 231 log.go:181] (0xc0001c8000) (0xc000c01180) Stream added, broadcasting: 1\nI0809 23:43:51.826114 231 log.go:181] (0xc0001c8000) Reply frame received for 1\nI0809 23:43:51.826149 231 log.go:181] (0xc0001c8000) (0xc000830960) Create stream\nI0809 23:43:51.826157 231 log.go:181] (0xc0001c8000) (0xc000830960) Stream added, broadcasting: 3\nI0809 23:43:51.827217 231 log.go:181] (0xc0001c8000) Reply frame received for 3\nI0809 23:43:51.827265 231 log.go:181] (0xc0001c8000) (0xc0007f6000) Create stream\nI0809 23:43:51.827292 231 log.go:181] (0xc0001c8000) 
(0xc0007f6000) Stream added, broadcasting: 5\nI0809 23:43:51.828228 231 log.go:181] (0xc0001c8000) Reply frame received for 5\nI0809 23:43:51.886723 231 log.go:181] (0xc0001c8000) Data frame received for 5\nI0809 23:43:51.886758 231 log.go:181] (0xc0007f6000) (5) Data frame handling\nI0809 23:43:51.886788 231 log.go:181] (0xc0007f6000) (5) Data frame sent\nI0809 23:43:51.886803 231 log.go:181] (0xc0001c8000) Data frame received for 5\nI0809 23:43:51.886818 231 log.go:181] (0xc0007f6000) (5) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0809 23:43:51.886847 231 log.go:181] (0xc0001c8000) Data frame received for 3\nI0809 23:43:51.886878 231 log.go:181] (0xc000830960) (3) Data frame handling\nI0809 23:43:51.889769 231 log.go:181] (0xc0001c8000) Data frame received for 1\nI0809 23:43:51.889794 231 log.go:181] (0xc000c01180) (1) Data frame handling\nI0809 23:43:51.889805 231 log.go:181] (0xc000c01180) (1) Data frame sent\nI0809 23:43:51.889816 231 log.go:181] (0xc0001c8000) (0xc000c01180) Stream removed, broadcasting: 1\nI0809 23:43:51.889974 231 log.go:181] (0xc0001c8000) Go away received\nI0809 23:43:51.890185 231 log.go:181] (0xc0001c8000) (0xc000c01180) Stream removed, broadcasting: 1\nI0809 23:43:51.890204 231 log.go:181] (0xc0001c8000) (0xc000830960) Stream removed, broadcasting: 3\nI0809 23:43:51.890213 231 log.go:181] (0xc0001c8000) (0xc0007f6000) Stream removed, broadcasting: 5\n" Aug 9 23:43:51.895: INFO: stdout: "" Aug 9 23:43:51.896: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-6233 execpod5mrkn -- /bin/sh -x -c nc -zv -t -w 2 10.102.134.40 80' Aug 9 23:43:52.129: INFO: stderr: "I0809 23:43:52.037174 249 log.go:181] (0xc000c1afd0) (0xc000b5fa40) Create stream\nI0809 23:43:52.037221 249 log.go:181] (0xc000c1afd0) (0xc000b5fa40) Stream added, broadcasting: 1\nI0809 23:43:52.041957 249 log.go:181] 
(0xc000c1afd0) Reply frame received for 1\nI0809 23:43:52.041998 249 log.go:181] (0xc000c1afd0) (0xc0009930e0) Create stream\nI0809 23:43:52.042023 249 log.go:181] (0xc000c1afd0) (0xc0009930e0) Stream added, broadcasting: 3\nI0809 23:43:52.043011 249 log.go:181] (0xc000c1afd0) Reply frame received for 3\nI0809 23:43:52.043044 249 log.go:181] (0xc000c1afd0) (0xc0009694a0) Create stream\nI0809 23:43:52.043052 249 log.go:181] (0xc000c1afd0) (0xc0009694a0) Stream added, broadcasting: 5\nI0809 23:43:52.043769 249 log.go:181] (0xc000c1afd0) Reply frame received for 5\nI0809 23:43:52.123320 249 log.go:181] (0xc000c1afd0) Data frame received for 3\nI0809 23:43:52.123351 249 log.go:181] (0xc0009930e0) (3) Data frame handling\nI0809 23:43:52.123405 249 log.go:181] (0xc000c1afd0) Data frame received for 5\nI0809 23:43:52.123441 249 log.go:181] (0xc0009694a0) (5) Data frame handling\nI0809 23:43:52.123463 249 log.go:181] (0xc0009694a0) (5) Data frame sent\nI0809 23:43:52.123479 249 log.go:181] (0xc000c1afd0) Data frame received for 5\nI0809 23:43:52.123493 249 log.go:181] (0xc0009694a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.102.134.40 80\nConnection to 10.102.134.40 80 port [tcp/http] succeeded!\nI0809 23:43:52.124789 249 log.go:181] (0xc000c1afd0) Data frame received for 1\nI0809 23:43:52.124811 249 log.go:181] (0xc000b5fa40) (1) Data frame handling\nI0809 23:43:52.124821 249 log.go:181] (0xc000b5fa40) (1) Data frame sent\nI0809 23:43:52.124849 249 log.go:181] (0xc000c1afd0) (0xc000b5fa40) Stream removed, broadcasting: 1\nI0809 23:43:52.124913 249 log.go:181] (0xc000c1afd0) Go away received\nI0809 23:43:52.125132 249 log.go:181] (0xc000c1afd0) (0xc000b5fa40) Stream removed, broadcasting: 1\nI0809 23:43:52.125146 249 log.go:181] (0xc000c1afd0) (0xc0009930e0) Stream removed, broadcasting: 3\nI0809 23:43:52.125151 249 log.go:181] (0xc000c1afd0) (0xc0009694a0) Stream removed, broadcasting: 5\n" Aug 9 23:43:52.130: INFO: stdout: "" Aug 9 23:43:52.130: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-6233 execpod5mrkn -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 30759' Aug 9 23:43:52.348: INFO: stderr: "I0809 23:43:52.266619 267 log.go:181] (0xc0005cd1e0) (0xc000d39360) Create stream\nI0809 23:43:52.266672 267 log.go:181] (0xc0005cd1e0) (0xc000d39360) Stream added, broadcasting: 1\nI0809 23:43:52.274671 267 log.go:181] (0xc0005cd1e0) Reply frame received for 1\nI0809 23:43:52.274733 267 log.go:181] (0xc0005cd1e0) (0xc000d230e0) Create stream\nI0809 23:43:52.274818 267 log.go:181] (0xc0005cd1e0) (0xc000d230e0) Stream added, broadcasting: 3\nI0809 23:43:52.275837 267 log.go:181] (0xc0005cd1e0) Reply frame received for 3\nI0809 23:43:52.275869 267 log.go:181] (0xc0005cd1e0) (0xc000d1f2c0) Create stream\nI0809 23:43:52.275879 267 log.go:181] (0xc0005cd1e0) (0xc000d1f2c0) Stream added, broadcasting: 5\nI0809 23:43:52.277244 267 log.go:181] (0xc0005cd1e0) Reply frame received for 5\nI0809 23:43:52.342993 267 log.go:181] (0xc0005cd1e0) Data frame received for 5\nI0809 23:43:52.343040 267 log.go:181] (0xc000d1f2c0) (5) Data frame handling\nI0809 23:43:52.343063 267 log.go:181] (0xc000d1f2c0) (5) Data frame sent\nI0809 23:43:52.343081 267 log.go:181] (0xc0005cd1e0) Data frame received for 5\n+ nc -zv -t -w 2 172.18.0.14 30759\nConnection to 172.18.0.14 30759 port [tcp/30759] succeeded!\nI0809 23:43:52.343098 267 log.go:181] (0xc000d1f2c0) (5) Data frame handling\nI0809 23:43:52.343119 267 log.go:181] (0xc0005cd1e0) Data frame received for 3\nI0809 23:43:52.343132 267 log.go:181] (0xc000d230e0) (3) Data frame handling\nI0809 23:43:52.343145 267 log.go:181] (0xc0005cd1e0) Data frame received for 1\nI0809 23:43:52.343151 267 log.go:181] (0xc000d39360) (1) Data frame handling\nI0809 23:43:52.343159 267 log.go:181] (0xc000d39360) (1) Data frame sent\nI0809 23:43:52.343172 267 log.go:181] (0xc0005cd1e0) (0xc000d39360) Stream removed, broadcasting: 1\nI0809 
23:43:52.343183 267 log.go:181] (0xc0005cd1e0) Go away received\nI0809 23:43:52.343619 267 log.go:181] (0xc0005cd1e0) (0xc000d39360) Stream removed, broadcasting: 1\nI0809 23:43:52.343641 267 log.go:181] (0xc0005cd1e0) (0xc000d230e0) Stream removed, broadcasting: 3\nI0809 23:43:52.343652 267 log.go:181] (0xc0005cd1e0) (0xc000d1f2c0) Stream removed, broadcasting: 5\n" Aug 9 23:43:52.349: INFO: stdout: "" Aug 9 23:43:52.349: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-6233 execpod5mrkn -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 30759' Aug 9 23:43:52.553: INFO: stderr: "I0809 23:43:52.477171 285 log.go:181] (0xc0008aad10) (0xc000820e60) Create stream\nI0809 23:43:52.477228 285 log.go:181] (0xc0008aad10) (0xc000820e60) Stream added, broadcasting: 1\nI0809 23:43:52.482361 285 log.go:181] (0xc0008aad10) Reply frame received for 1\nI0809 23:43:52.482397 285 log.go:181] (0xc0008aad10) (0xc000996960) Create stream\nI0809 23:43:52.482408 285 log.go:181] (0xc0008aad10) (0xc000996960) Stream added, broadcasting: 3\nI0809 23:43:52.483426 285 log.go:181] (0xc0008aad10) Reply frame received for 3\nI0809 23:43:52.483464 285 log.go:181] (0xc0008aad10) (0xc0007250e0) Create stream\nI0809 23:43:52.483476 285 log.go:181] (0xc0008aad10) (0xc0007250e0) Stream added, broadcasting: 5\nI0809 23:43:52.484438 285 log.go:181] (0xc0008aad10) Reply frame received for 5\nI0809 23:43:52.545504 285 log.go:181] (0xc0008aad10) Data frame received for 5\nI0809 23:43:52.545559 285 log.go:181] (0xc0007250e0) (5) Data frame handling\nI0809 23:43:52.545571 285 log.go:181] (0xc0007250e0) (5) Data frame sent\nI0809 23:43:52.545581 285 log.go:181] (0xc0008aad10) Data frame received for 5\n+ nc -zv -t -w 2 172.18.0.12 30759\nConnection to 172.18.0.12 30759 port [tcp/30759] succeeded!\nI0809 23:43:52.545594 285 log.go:181] (0xc0008aad10) Data frame received for 3\nI0809 23:43:52.545606 285 log.go:181] (0xc000996960) (3) 
Data frame handling\nI0809 23:43:52.545622 285 log.go:181] (0xc0007250e0) (5) Data frame handling\nI0809 23:43:52.547357 285 log.go:181] (0xc0008aad10) Data frame received for 1\nI0809 23:43:52.547382 285 log.go:181] (0xc000820e60) (1) Data frame handling\nI0809 23:43:52.547402 285 log.go:181] (0xc000820e60) (1) Data frame sent\nI0809 23:43:52.547419 285 log.go:181] (0xc0008aad10) (0xc000820e60) Stream removed, broadcasting: 1\nI0809 23:43:52.547561 285 log.go:181] (0xc0008aad10) Go away received\nI0809 23:43:52.547819 285 log.go:181] (0xc0008aad10) (0xc000820e60) Stream removed, broadcasting: 1\nI0809 23:43:52.547841 285 log.go:181] (0xc0008aad10) (0xc000996960) Stream removed, broadcasting: 3\nI0809 23:43:52.547854 285 log.go:181] (0xc0008aad10) (0xc0007250e0) Stream removed, broadcasting: 5\n" Aug 9 23:43:52.553: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:43:52.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6233" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:12.126 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":303,"completed":75,"skipped":1125,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:43:52.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 9 23:43:52.624: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config version' Aug 9 23:43:52.752: INFO: stderr: "" Aug 9 23:43:52.752: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"20+\", GitVersion:\"v1.20.0-alpha.0.523+97c5f1f7632f2d\", GitCommit:\"97c5f1f7632f2d349303515830be76f6c1084b19\", GitTreeState:\"clean\", BuildDate:\"2020-08-07T13:25:26Z\", 
GoVersion:\"go1.14.7\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-rc.1\", GitCommit:\"2cbdfecbbd57dbd4e9f42d73a75fbbc6d9eadfd3\", GitTreeState:\"clean\", BuildDate:\"2020-07-19T21:33:31Z\", GoVersion:\"go1.14.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:43:52.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5859" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":303,"completed":76,"skipped":1185,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:43:52.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Aug 9 23:43:52.818: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:44:10.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1778" for this suite. • [SLOW TEST:17.649 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":303,"completed":77,"skipped":1187,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:44:10.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-6095 [It] should have a working scale subresource [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating statefulset ss in namespace statefulset-6095 Aug 9 23:44:10.539: INFO: Found 0 stateful pods, waiting for 1 Aug 9 23:44:20.542: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Aug 9 23:44:20.556: INFO: Deleting all statefulset in ns statefulset-6095 Aug 9 23:44:20.629: INFO: Scaling statefulset ss to 0 Aug 9 23:44:50.694: INFO: Waiting for statefulset status.replicas updated to 0 Aug 9 23:44:50.698: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:44:50.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6095" for this suite. • [SLOW TEST:40.330 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":303,"completed":78,"skipped":1226,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:44:50.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:44:57.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4087" for this suite. • [SLOW TEST:7.095 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":303,"completed":79,"skipped":1238,"failed":0} SSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:44:57.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Aug 9 23:44:57.934: INFO: Waiting up to 1m0s for all nodes to be ready Aug 9 23:45:57.956: INFO: Waiting for terminating namespaces to be deleted... [BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:45:57.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption-path STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:487 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. 
Aug 9 23:46:02.081: INFO: found a healthy node: latest-worker2 [It] runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 9 23:46:14.272: INFO: pods created so far: [1 1 1] Aug 9 23:46:14.272: INFO: length of pods created so far: 3 Aug 9 23:46:28.281: INFO: pods created so far: [2 2 1] [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:46:35.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-2179" for this suite. [AfterEach] PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:461 [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:46:35.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-3914" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:97.756 seconds] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 PreemptionExecutionPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:450 runs ReplicaSets to verify preemption running path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":303,"completed":80,"skipped":1242,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:46:35.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 9 23:46:36.278: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 9 23:46:38.287: INFO: 
deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732613596, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732613596, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732613596, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732613596, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 9 23:46:41.317: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:46:42.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-460" for this suite. STEP: Destroying namespace "webhook-460-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.172 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":303,"completed":81,"skipped":1253,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:46:42.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:46:46.900: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6786" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":303,"completed":82,"skipped":1264,"failed":0} SSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:46:46.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all Aug 9 23:46:47.272: INFO: Waiting up to 5m0s for pod "client-containers-24a344c0-6d63-46ce-a961-094f1e52f7da" in namespace "containers-5720" to be "Succeeded or Failed" Aug 9 23:46:47.275: INFO: Pod "client-containers-24a344c0-6d63-46ce-a961-094f1e52f7da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.586375ms Aug 9 23:46:49.298: INFO: Pod "client-containers-24a344c0-6d63-46ce-a961-094f1e52f7da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025735839s Aug 9 23:46:51.314: INFO: Pod "client-containers-24a344c0-6d63-46ce-a961-094f1e52f7da": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.041582874s STEP: Saw pod success Aug 9 23:46:51.314: INFO: Pod "client-containers-24a344c0-6d63-46ce-a961-094f1e52f7da" satisfied condition "Succeeded or Failed" Aug 9 23:46:51.317: INFO: Trying to get logs from node latest-worker2 pod client-containers-24a344c0-6d63-46ce-a961-094f1e52f7da container test-container: STEP: delete the pod Aug 9 23:46:51.442: INFO: Waiting for pod client-containers-24a344c0-6d63-46ce-a961-094f1e52f7da to disappear Aug 9 23:46:51.446: INFO: Pod client-containers-24a344c0-6d63-46ce-a961-094f1e52f7da no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:46:51.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5720" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":303,"completed":83,"skipped":1268,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:46:51.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 9 23:46:51.532: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Aug 9 23:46:51.539: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 9 23:46:51.577: INFO: Number of nodes with available pods: 0 Aug 9 23:46:51.577: INFO: Node latest-worker is running more than one daemon pod Aug 9 23:46:52.582: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 9 23:46:52.587: INFO: Number of nodes with available pods: 0 Aug 9 23:46:52.587: INFO: Node latest-worker is running more than one daemon pod Aug 9 23:46:53.677: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 9 23:46:53.718: INFO: Number of nodes with available pods: 0 Aug 9 23:46:53.718: INFO: Node latest-worker is running more than one daemon pod Aug 9 23:46:54.639: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 9 23:46:54.682: INFO: Number of nodes with available pods: 0 Aug 9 23:46:54.682: INFO: Node latest-worker is running more than one daemon pod Aug 9 23:46:55.591: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 9 23:46:55.594: INFO: Number of nodes with available pods: 1 Aug 9 23:46:55.594: INFO: Node latest-worker2 is running more than one daemon pod Aug 9 23:46:56.735: INFO: DaemonSet pods can't tolerate node 
latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 9 23:46:56.746: INFO: Number of nodes with available pods: 2 Aug 9 23:46:56.746: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Aug 9 23:46:56.975: INFO: Wrong image for pod: daemon-set-5xw7h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 9 23:46:56.975: INFO: Wrong image for pod: daemon-set-b5vks. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 9 23:46:56.978: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 9 23:46:57.983: INFO: Wrong image for pod: daemon-set-5xw7h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 9 23:46:57.983: INFO: Wrong image for pod: daemon-set-b5vks. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 9 23:46:57.987: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 9 23:46:58.983: INFO: Wrong image for pod: daemon-set-5xw7h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 9 23:46:58.983: INFO: Wrong image for pod: daemon-set-b5vks. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 9 23:46:58.987: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 9 23:46:59.997: INFO: Wrong image for pod: daemon-set-5xw7h. 
Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 9 23:46:59.997: INFO: Wrong image for pod: daemon-set-b5vks. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 9 23:46:59.997: INFO: Pod daemon-set-b5vks is not available Aug 9 23:47:00.002: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 9 23:47:00.982: INFO: Wrong image for pod: daemon-set-5xw7h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 9 23:47:00.982: INFO: Wrong image for pod: daemon-set-b5vks. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 9 23:47:00.982: INFO: Pod daemon-set-b5vks is not available Aug 9 23:47:00.986: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 9 23:47:01.983: INFO: Wrong image for pod: daemon-set-5xw7h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 9 23:47:01.983: INFO: Wrong image for pod: daemon-set-b5vks. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 9 23:47:01.983: INFO: Pod daemon-set-b5vks is not available Aug 9 23:47:01.988: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 9 23:47:02.983: INFO: Wrong image for pod: daemon-set-5xw7h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 9 23:47:02.983: INFO: Wrong image for pod: daemon-set-b5vks. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Aug 9 23:47:02.983: INFO: Pod daemon-set-b5vks is not available Aug 9 23:47:02.987: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 9 23:47:03.983: INFO: Wrong image for pod: daemon-set-5xw7h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 9 23:47:03.983: INFO: Pod daemon-set-bfwdx is not available Aug 9 23:47:03.988: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 9 23:47:04.987: INFO: Wrong image for pod: daemon-set-5xw7h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 9 23:47:04.987: INFO: Pod daemon-set-bfwdx is not available Aug 9 23:47:04.990: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 9 23:47:05.982: INFO: Wrong image for pod: daemon-set-5xw7h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 9 23:47:05.982: INFO: Pod daemon-set-bfwdx is not available Aug 9 23:47:06.009: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 9 23:47:06.982: INFO: Wrong image for pod: daemon-set-5xw7h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 9 23:47:06.986: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 9 23:47:07.988: INFO: Wrong image for pod: daemon-set-5xw7h. 
Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 9 23:47:07.988: INFO: Pod daemon-set-5xw7h is not available Aug 9 23:47:08.006: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 9 23:47:08.983: INFO: Wrong image for pod: daemon-set-5xw7h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 9 23:47:08.983: INFO: Pod daemon-set-5xw7h is not available Aug 9 23:47:08.987: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 9 23:47:09.986: INFO: Wrong image for pod: daemon-set-5xw7h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 9 23:47:09.986: INFO: Pod daemon-set-5xw7h is not available Aug 9 23:47:09.991: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 9 23:47:10.983: INFO: Wrong image for pod: daemon-set-5xw7h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 9 23:47:10.983: INFO: Pod daemon-set-5xw7h is not available Aug 9 23:47:10.986: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 9 23:47:11.983: INFO: Wrong image for pod: daemon-set-5xw7h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Aug 9 23:47:11.983: INFO: Pod daemon-set-5xw7h is not available Aug 9 23:47:11.986: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 9 23:47:12.983: INFO: Wrong image for pod: daemon-set-5xw7h. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Aug 9 23:47:12.983: INFO: Pod daemon-set-5xw7h is not available Aug 9 23:47:12.988: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 9 23:47:14.040: INFO: Pod daemon-set-bb48w is not available Aug 9 23:47:14.045: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Aug 9 23:47:14.071: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 9 23:47:14.074: INFO: Number of nodes with available pods: 1 Aug 9 23:47:14.075: INFO: Node latest-worker2 is running more than one daemon pod Aug 9 23:47:15.079: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 9 23:47:15.083: INFO: Number of nodes with available pods: 1 Aug 9 23:47:15.083: INFO: Node latest-worker2 is running more than one daemon pod Aug 9 23:47:16.080: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 9 23:47:16.084: INFO: Number of nodes with available pods: 1 Aug 9 23:47:16.084: INFO: Node latest-worker2 is running more than one daemon pod Aug 9 23:47:17.080: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 9 23:47:17.084: INFO: Number of nodes with available pods: 2 Aug 9 23:47:17.084: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3989, will wait for the garbage collector to delete the pods Aug 9 23:47:17.158: INFO: Deleting DaemonSet.extensions daemon-set took: 5.841413ms Aug 9 23:47:17.659: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.253407ms Aug 9 23:47:23.862: INFO: Number of nodes with available pods: 0 Aug 9 23:47:23.862: INFO: Number of running nodes: 0, number of available pods: 0 Aug 
9 23:47:23.866: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3989/daemonsets","resourceVersion":"5775537"},"items":null} Aug 9 23:47:23.868: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3989/pods","resourceVersion":"5775537"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:47:23.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3989" for this suite. • [SLOW TEST:32.477 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":303,"completed":84,"skipped":1276,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:47:23.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests 
and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 9 23:47:23.976: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:47:25.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7672" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":303,"completed":85,"skipped":1281,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:47:25.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-4669 STEP: creating service affinity-nodeport-transition in namespace services-4669 STEP: creating replication controller affinity-nodeport-transition in namespace services-4669 I0809 23:47:25.284357 8 runners.go:190] Created 
replication controller with name: affinity-nodeport-transition, namespace: services-4669, replica count: 3 I0809 23:47:28.334894 8 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0809 23:47:31.335205 8 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 9 23:47:31.345: INFO: Creating new exec pod Aug 9 23:47:36.381: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-4669 execpod-affinityclgvv -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' Aug 9 23:47:36.620: INFO: stderr: "I0809 23:47:36.516393 321 log.go:181] (0xc00091f760) (0xc00082a960) Create stream\nI0809 23:47:36.516451 321 log.go:181] (0xc00091f760) (0xc00082a960) Stream added, broadcasting: 1\nI0809 23:47:36.519258 321 log.go:181] (0xc00091f760) Reply frame received for 1\nI0809 23:47:36.519313 321 log.go:181] (0xc00091f760) (0xc000518460) Create stream\nI0809 23:47:36.519340 321 log.go:181] (0xc00091f760) (0xc000518460) Stream added, broadcasting: 3\nI0809 23:47:36.520248 321 log.go:181] (0xc00091f760) Reply frame received for 3\nI0809 23:47:36.520278 321 log.go:181] (0xc00091f760) (0xc0005190e0) Create stream\nI0809 23:47:36.520289 321 log.go:181] (0xc00091f760) (0xc0005190e0) Stream added, broadcasting: 5\nI0809 23:47:36.521252 321 log.go:181] (0xc00091f760) Reply frame received for 5\nI0809 23:47:36.612808 321 log.go:181] (0xc00091f760) Data frame received for 5\nI0809 23:47:36.612854 321 log.go:181] (0xc0005190e0) (5) Data frame handling\nI0809 23:47:36.612872 321 log.go:181] (0xc0005190e0) (5) Data frame sent\nI0809 23:47:36.612883 321 log.go:181] (0xc00091f760) Data frame received for 5\nI0809 23:47:36.612892 321 log.go:181] (0xc0005190e0) (5) Data frame handling\n+ nc -zv -t 
-w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Aug 9 23:47:36.620: INFO: stdout: "" Aug 9 23:47:36.621: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-4669 execpod-affinityclgvv -- /bin/sh -x -c nc -zv -t -w 2 10.100.101.47 80' Aug 9 23:47:36.817: INFO: stderr: "+ nc -zv -t -w 2 10.100.101.47 80\nConnection to 10.100.101.47 80 port [tcp/http] succeeded!\n" Aug 9 23:47:36.817: INFO: stdout: "" Aug 9 23:47:36.817: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-4669 execpod-affinityclgvv -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 31415' Aug 9 23:47:37.030: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.14 31415\nConnection to 172.18.0.14 31415 port [tcp/31415] succeeded!\n" Aug 9 23:47:37.030: INFO: stdout: "" Aug 9 23:47:37.030: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-4669 execpod-affinityclgvv -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 31415' Aug 9 23:47:37.234: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.12 31415\nConnection to 172.18.0.12 31415 port [tcp/31415] succeeded!\n" Aug 9 23:47:37.234: INFO: stdout: "" Aug 9 23:47:37.248: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-4669 execpod-affinityclgvv -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.14:31415/ ; done' Aug 9 23:47:37.590: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31415/\n[... 15 more echo/curl iterations, stream frame logs elided ...]\n" Aug 9 23:47:37.591: INFO: stdout: 
"\naffinity-nodeport-transition-2n7xk\naffinity-nodeport-transition-z5m97\naffinity-nodeport-transition-z5m97\naffinity-nodeport-transition-z5m97\naffinity-nodeport-transition-8827k\naffinity-nodeport-transition-8827k\naffinity-nodeport-transition-z5m97\naffinity-nodeport-transition-8827k\naffinity-nodeport-transition-2n7xk\naffinity-nodeport-transition-z5m97\naffinity-nodeport-transition-8827k\naffinity-nodeport-transition-2n7xk\naffinity-nodeport-transition-8827k\naffinity-nodeport-transition-z5m97\naffinity-nodeport-transition-z5m97\naffinity-nodeport-transition-8827k" Aug 9 23:47:37.591: INFO: Received response from host: affinity-nodeport-transition-2n7xk Aug 9 23:47:37.591: INFO: Received response from host: affinity-nodeport-transition-z5m97 Aug 9 23:47:37.591: INFO: Received response from host: affinity-nodeport-transition-z5m97 Aug 9 23:47:37.591: INFO: Received response from host: affinity-nodeport-transition-z5m97 Aug 9 23:47:37.591: INFO: Received response from host: affinity-nodeport-transition-8827k Aug 9 23:47:37.591: INFO: Received response from host: affinity-nodeport-transition-8827k Aug 9 23:47:37.591: INFO: Received response from host: affinity-nodeport-transition-z5m97 Aug 9 23:47:37.591: INFO: Received response from host: affinity-nodeport-transition-8827k Aug 9 23:47:37.591: INFO: Received response from host: affinity-nodeport-transition-2n7xk Aug 9 23:47:37.591: INFO: Received response from host: affinity-nodeport-transition-z5m97 Aug 9 23:47:37.591: INFO: Received response from host: affinity-nodeport-transition-8827k Aug 9 23:47:37.591: INFO: Received response from host: affinity-nodeport-transition-2n7xk Aug 9 23:47:37.591: INFO: Received response from host: affinity-nodeport-transition-8827k Aug 9 23:47:37.591: INFO: Received response from host: affinity-nodeport-transition-z5m97 Aug 9 23:47:37.591: INFO: Received response from host: affinity-nodeport-transition-z5m97 Aug 9 23:47:37.591: INFO: Received response from host: 
affinity-nodeport-transition-8827k Aug 9 23:47:37.600: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-4669 execpod-affinityclgvv -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.14:31415/ ; done' Aug 9 23:47:37.972: INFO: stderr: "I0809 23:47:37.768450 402 log.go:181] (0xc000a3ef20) (0xc000dc48c0) Create stream\nI0809 23:47:37.768506 402 log.go:181] (0xc000a3ef20) (0xc000dc48c0) Stream added, broadcasting: 1\nI0809 23:47:37.772054 402 log.go:181] (0xc000a3ef20) Reply frame received for 1\nI0809 23:47:37.772092 402 log.go:181] (0xc000a3ef20) (0xc000aff0e0) Create stream\nI0809 23:47:37.772102 402 log.go:181] (0xc000a3ef20) (0xc000aff0e0) Stream added, broadcasting: 3\nI0809 23:47:37.772995 402 log.go:181] (0xc000a3ef20) Reply frame received for 3\nI0809 23:47:37.773019 402 log.go:181] (0xc000a3ef20) (0xc000690aa0) Create stream\nI0809 23:47:37.773026 402 log.go:181] (0xc000a3ef20) (0xc000690aa0) Stream added, broadcasting: 5\nI0809 23:47:37.773830 402 log.go:181] (0xc000a3ef20) Reply frame received for 5\nI0809 23:47:37.855380 402 log.go:181] (0xc000a3ef20) Data frame received for 5\nI0809 23:47:37.855427 402 log.go:181] (0xc000690aa0) (5) Data frame handling\nI0809 23:47:37.855444 402 log.go:181] (0xc000690aa0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31415/\nI0809 23:47:37.855469 402 log.go:181] (0xc000a3ef20) Data frame received for 3\nI0809 23:47:37.855480 402 log.go:181] (0xc000aff0e0) (3) Data frame handling\nI0809 23:47:37.855498 402 log.go:181] (0xc000aff0e0) (3) Data frame sent\nI0809 23:47:37.858375 402 log.go:181] (0xc000a3ef20) Data frame received for 3\nI0809 23:47:37.858411 402 log.go:181] (0xc000aff0e0) (3) Data frame handling\nI0809 23:47:37.858433 402 log.go:181] (0xc000aff0e0) (3) Data frame sent\nI0809 23:47:37.859308 402 log.go:181] (0xc000a3ef20) Data frame received 
for 3\nI0809 23:47:37.859336 402 log.go:181] (0xc000a3ef20) Data frame received for 5\nI0809 23:47:37.859363 402 log.go:181] (0xc000690aa0) (5) Data frame handling\nI0809 23:47:37.859380 402 log.go:181] (0xc000690aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31415/\nI0809 23:47:37.859399 402 log.go:181] (0xc000aff0e0) (3) Data frame handling\nI0809 23:47:37.859411 402 log.go:181] (0xc000aff0e0) (3) Data frame sent\nI0809 23:47:37.865966 402 log.go:181] (0xc000a3ef20) Data frame received for 3\nI0809 23:47:37.865986 402 log.go:181] (0xc000aff0e0) (3) Data frame handling\nI0809 23:47:37.866008 402 log.go:181] (0xc000aff0e0) (3) Data frame sent\nI0809 23:47:37.866303 402 log.go:181] (0xc000a3ef20) Data frame received for 5\nI0809 23:47:37.866327 402 log.go:181] (0xc000a3ef20) Data frame received for 3\nI0809 23:47:37.866355 402 log.go:181] (0xc000aff0e0) (3) Data frame handling\nI0809 23:47:37.866367 402 log.go:181] (0xc000aff0e0) (3) Data frame sent\nI0809 23:47:37.866393 402 log.go:181] (0xc000690aa0) (5) Data frame handling\nI0809 23:47:37.866433 402 log.go:181] (0xc000690aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31415/\nI0809 23:47:37.873165 402 log.go:181] (0xc000a3ef20) Data frame received for 3\nI0809 23:47:37.873199 402 log.go:181] (0xc000aff0e0) (3) Data frame handling\nI0809 23:47:37.873230 402 log.go:181] (0xc000aff0e0) (3) Data frame sent\nI0809 23:47:37.873831 402 log.go:181] (0xc000a3ef20) Data frame received for 3\nI0809 23:47:37.873849 402 log.go:181] (0xc000aff0e0) (3) Data frame handling\nI0809 23:47:37.873859 402 log.go:181] (0xc000aff0e0) (3) Data frame sent\nI0809 23:47:37.873889 402 log.go:181] (0xc000a3ef20) Data frame received for 5\nI0809 23:47:37.873903 402 log.go:181] (0xc000690aa0) (5) Data frame handling\nI0809 23:47:37.873916 402 log.go:181] (0xc000690aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31415/\nI0809 
23:47:37.879999 402 log.go:181] (0xc000a3ef20) Data frame received for 3\nI0809 23:47:37.880025 402 log.go:181] (0xc000aff0e0) (3) Data frame handling\nI0809 23:47:37.880051 402 log.go:181] (0xc000aff0e0) (3) Data frame sent\nI0809 23:47:37.881099 402 log.go:181] (0xc000a3ef20) Data frame received for 3\nI0809 23:47:37.881112 402 log.go:181] (0xc000aff0e0) (3) Data frame handling\nI0809 23:47:37.881118 402 log.go:181] (0xc000aff0e0) (3) Data frame sent\nI0809 23:47:37.881137 402 log.go:181] (0xc000a3ef20) Data frame received for 5\nI0809 23:47:37.881155 402 log.go:181] (0xc000690aa0) (5) Data frame handling\nI0809 23:47:37.881169 402 log.go:181] (0xc000690aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31415/\nI0809 23:47:37.886614 402 log.go:181] (0xc000a3ef20) Data frame received for 3\nI0809 23:47:37.886635 402 log.go:181] (0xc000aff0e0) (3) Data frame handling\nI0809 23:47:37.886650 402 log.go:181] (0xc000aff0e0) (3) Data frame sent\nI0809 23:47:37.887173 402 log.go:181] (0xc000a3ef20) Data frame received for 5\nI0809 23:47:37.887185 402 log.go:181] (0xc000690aa0) (5) Data frame handling\nI0809 23:47:37.887191 402 log.go:181] (0xc000690aa0) (5) Data frame sent\n+ echo\n+ curl -q -sI0809 23:47:37.887211 402 log.go:181] (0xc000a3ef20) Data frame received for 3\nI0809 23:47:37.887242 402 log.go:181] (0xc000aff0e0) (3) Data frame handling\nI0809 23:47:37.887257 402 log.go:181] (0xc000aff0e0) (3) Data frame sent\nI0809 23:47:37.887271 402 log.go:181] (0xc000a3ef20) Data frame received for 5\nI0809 23:47:37.887279 402 log.go:181] (0xc000690aa0) (5) Data frame handling\nI0809 23:47:37.887289 402 log.go:181] (0xc000690aa0) (5) Data frame sent\n --connect-timeout 2 http://172.18.0.14:31415/\nI0809 23:47:37.894472 402 log.go:181] (0xc000a3ef20) Data frame received for 3\nI0809 23:47:37.894493 402 log.go:181] (0xc000aff0e0) (3) Data frame handling\nI0809 23:47:37.894511 402 log.go:181] (0xc000aff0e0) (3) Data frame sent\nI0809 
23:47:37.895169 402 log.go:181] (0xc000a3ef20) Data frame received for 3\nI0809 23:47:37.895187 402 log.go:181] (0xc000aff0e0) (3) Data frame handling\nI0809 23:47:37.895194 402 log.go:181] (0xc000aff0e0) (3) Data frame sent\nI0809 23:47:37.895224 402 log.go:181] (0xc000a3ef20) Data frame received for 5\nI0809 23:47:37.895243 402 log.go:181] (0xc000690aa0) (5) Data frame handling\nI0809 23:47:37.895279 402 log.go:181] (0xc000690aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31415/\nI0809 23:47:37.902935 402 log.go:181] (0xc000a3ef20) Data frame received for 3\nI0809 23:47:37.902954 402 log.go:181] (0xc000aff0e0) (3) Data frame handling\nI0809 23:47:37.902989 402 log.go:181] (0xc000aff0e0) (3) Data frame sent\nI0809 23:47:37.903490 402 log.go:181] (0xc000a3ef20) Data frame received for 3\nI0809 23:47:37.903507 402 log.go:181] (0xc000aff0e0) (3) Data frame handling\nI0809 23:47:37.903517 402 log.go:181] (0xc000aff0e0) (3) Data frame sent\nI0809 23:47:37.903529 402 log.go:181] (0xc000a3ef20) Data frame received for 5\nI0809 23:47:37.903536 402 log.go:181] (0xc000690aa0) (5) Data frame handling\nI0809 23:47:37.903542 402 log.go:181] (0xc000690aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31415/\nI0809 23:47:37.909374 402 log.go:181] (0xc000a3ef20) Data frame received for 3\nI0809 23:47:37.909397 402 log.go:181] (0xc000aff0e0) (3) Data frame handling\nI0809 23:47:37.909409 402 log.go:181] (0xc000aff0e0) (3) Data frame sent\nI0809 23:47:37.909849 402 log.go:181] (0xc000a3ef20) Data frame received for 3\nI0809 23:47:37.909867 402 log.go:181] (0xc000aff0e0) (3) Data frame handling\nI0809 23:47:37.909879 402 log.go:181] (0xc000aff0e0) (3) Data frame sent\nI0809 23:47:37.909894 402 log.go:181] (0xc000a3ef20) Data frame received for 5\nI0809 23:47:37.909902 402 log.go:181] (0xc000690aa0) (5) Data frame handling\nI0809 23:47:37.909911 402 log.go:181] (0xc000690aa0) (5) Data frame sent\n+ echo\n+ 
curl -q -s --connect-timeout 2 http://172.18.0.14:31415/\nI0809 23:47:37.915822 402 log.go:181] (0xc000a3ef20) Data frame received for 3\nI0809 23:47:37.915851 402 log.go:181] (0xc000aff0e0) (3) Data frame handling\nI0809 23:47:37.915876 402 log.go:181] (0xc000aff0e0) (3) Data frame sent\nI0809 23:47:37.916315 402 log.go:181] (0xc000a3ef20) Data frame received for 5\nI0809 23:47:37.916336 402 log.go:181] (0xc000690aa0) (5) Data frame handling\nI0809 23:47:37.916345 402 log.go:181] (0xc000690aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31415/\nI0809 23:47:37.916360 402 log.go:181] (0xc000a3ef20) Data frame received for 3\nI0809 23:47:37.916368 402 log.go:181] (0xc000aff0e0) (3) Data frame handling\nI0809 23:47:37.916378 402 log.go:181] (0xc000aff0e0) (3) Data frame sent\nI0809 23:47:37.923212 402 log.go:181] (0xc000a3ef20) Data frame received for 3\nI0809 23:47:37.923233 402 log.go:181] (0xc000aff0e0) (3) Data frame handling\nI0809 23:47:37.923249 402 log.go:181] (0xc000aff0e0) (3) Data frame sent\nI0809 23:47:37.923794 402 log.go:181] (0xc000a3ef20) Data frame received for 3\nI0809 23:47:37.923824 402 log.go:181] (0xc000a3ef20) Data frame received for 5\nI0809 23:47:37.923861 402 log.go:181] (0xc000690aa0) (5) Data frame handling\nI0809 23:47:37.923894 402 log.go:181] (0xc000690aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31415/\nI0809 23:47:37.923917 402 log.go:181] (0xc000aff0e0) (3) Data frame handling\nI0809 23:47:37.923937 402 log.go:181] (0xc000aff0e0) (3) Data frame sent\nI0809 23:47:37.928133 402 log.go:181] (0xc000a3ef20) Data frame received for 3\nI0809 23:47:37.928147 402 log.go:181] (0xc000aff0e0) (3) Data frame handling\nI0809 23:47:37.928154 402 log.go:181] (0xc000aff0e0) (3) Data frame sent\nI0809 23:47:37.928795 402 log.go:181] (0xc000a3ef20) Data frame received for 5\nI0809 23:47:37.928817 402 log.go:181] (0xc000690aa0) (5) Data frame handling\nI0809 23:47:37.928829 
402 log.go:181] (0xc000690aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31415/\nI0809 23:47:37.928923 402 log.go:181] (0xc000a3ef20) Data frame received for 3\nI0809 23:47:37.928951 402 log.go:181] (0xc000aff0e0) (3) Data frame handling\nI0809 23:47:37.928982 402 log.go:181] (0xc000aff0e0) (3) Data frame sent\nI0809 23:47:37.933000 402 log.go:181] (0xc000a3ef20) Data frame received for 3\nI0809 23:47:37.933013 402 log.go:181] (0xc000aff0e0) (3) Data frame handling\nI0809 23:47:37.933021 402 log.go:181] (0xc000aff0e0) (3) Data frame sent\nI0809 23:47:37.933898 402 log.go:181] (0xc000a3ef20) Data frame received for 5\nI0809 23:47:37.933925 402 log.go:181] (0xc000690aa0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31415/\nI0809 23:47:37.933943 402 log.go:181] (0xc000a3ef20) Data frame received for 3\nI0809 23:47:37.933981 402 log.go:181] (0xc000aff0e0) (3) Data frame handling\nI0809 23:47:37.934006 402 log.go:181] (0xc000aff0e0) (3) Data frame sent\nI0809 23:47:37.934029 402 log.go:181] (0xc000690aa0) (5) Data frame sent\nI0809 23:47:37.939274 402 log.go:181] (0xc000a3ef20) Data frame received for 3\nI0809 23:47:37.939296 402 log.go:181] (0xc000aff0e0) (3) Data frame handling\nI0809 23:47:37.939316 402 log.go:181] (0xc000aff0e0) (3) Data frame sent\nI0809 23:47:37.940134 402 log.go:181] (0xc000a3ef20) Data frame received for 3\nI0809 23:47:37.940154 402 log.go:181] (0xc000aff0e0) (3) Data frame handling\nI0809 23:47:37.940175 402 log.go:181] (0xc000a3ef20) Data frame received for 5\nI0809 23:47:37.940230 402 log.go:181] (0xc000690aa0) (5) Data frame handling\nI0809 23:47:37.940255 402 log.go:181] (0xc000690aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31415/\nI0809 23:47:37.940288 402 log.go:181] (0xc000aff0e0) (3) Data frame sent\nI0809 23:47:37.946489 402 log.go:181] (0xc000a3ef20) Data frame received for 3\nI0809 23:47:37.946511 402 log.go:181] 
(0xc000aff0e0) (3) Data frame handling\nI0809 23:47:37.946522 402 log.go:181] (0xc000aff0e0) (3) Data frame sent\nI0809 23:47:37.947054 402 log.go:181] (0xc000a3ef20) Data frame received for 5\nI0809 23:47:37.947077 402 log.go:181] (0xc000690aa0) (5) Data frame handling\nI0809 23:47:37.947092 402 log.go:181] (0xc000690aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31415/\nI0809 23:47:37.947110 402 log.go:181] (0xc000a3ef20) Data frame received for 3\nI0809 23:47:37.947142 402 log.go:181] (0xc000aff0e0) (3) Data frame handling\nI0809 23:47:37.947160 402 log.go:181] (0xc000aff0e0) (3) Data frame sent\nI0809 23:47:37.953328 402 log.go:181] (0xc000a3ef20) Data frame received for 3\nI0809 23:47:37.953347 402 log.go:181] (0xc000aff0e0) (3) Data frame handling\nI0809 23:47:37.953381 402 log.go:181] (0xc000aff0e0) (3) Data frame sent\nI0809 23:47:37.953876 402 log.go:181] (0xc000a3ef20) Data frame received for 3\nI0809 23:47:37.953900 402 log.go:181] (0xc000a3ef20) Data frame received for 5\nI0809 23:47:37.953930 402 log.go:181] (0xc000690aa0) (5) Data frame handling\nI0809 23:47:37.953941 402 log.go:181] (0xc000690aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31415/\nI0809 23:47:37.953953 402 log.go:181] (0xc000aff0e0) (3) Data frame handling\nI0809 23:47:37.953968 402 log.go:181] (0xc000aff0e0) (3) Data frame sent\nI0809 23:47:37.961465 402 log.go:181] (0xc000a3ef20) Data frame received for 3\nI0809 23:47:37.961490 402 log.go:181] (0xc000aff0e0) (3) Data frame handling\nI0809 23:47:37.961510 402 log.go:181] (0xc000aff0e0) (3) Data frame sent\nI0809 23:47:37.962670 402 log.go:181] (0xc000a3ef20) Data frame received for 5\nI0809 23:47:37.962691 402 log.go:181] (0xc000690aa0) (5) Data frame handling\nI0809 23:47:37.962763 402 log.go:181] (0xc000a3ef20) Data frame received for 3\nI0809 23:47:37.962783 402 log.go:181] (0xc000aff0e0) (3) Data frame handling\nI0809 23:47:37.965263 402 log.go:181] 
(0xc000a3ef20) Data frame received for 1\nI0809 23:47:37.965296 402 log.go:181] (0xc000dc48c0) (1) Data frame handling\nI0809 23:47:37.965316 402 log.go:181] (0xc000dc48c0) (1) Data frame sent\nI0809 23:47:37.965343 402 log.go:181] (0xc000a3ef20) (0xc000dc48c0) Stream removed, broadcasting: 1\nI0809 23:47:37.965364 402 log.go:181] (0xc000a3ef20) Go away received\nI0809 23:47:37.965780 402 log.go:181] (0xc000a3ef20) (0xc000dc48c0) Stream removed, broadcasting: 1\nI0809 23:47:37.965802 402 log.go:181] (0xc000a3ef20) (0xc000aff0e0) Stream removed, broadcasting: 3\nI0809 23:47:37.965819 402 log.go:181] (0xc000a3ef20) (0xc000690aa0) Stream removed, broadcasting: 5\n"
Aug 9 23:47:37.972: INFO: stdout: "\naffinity-nodeport-transition-2n7xk\naffinity-nodeport-transition-2n7xk\naffinity-nodeport-transition-2n7xk\naffinity-nodeport-transition-2n7xk\naffinity-nodeport-transition-2n7xk\naffinity-nodeport-transition-2n7xk\naffinity-nodeport-transition-2n7xk\naffinity-nodeport-transition-2n7xk\naffinity-nodeport-transition-2n7xk\naffinity-nodeport-transition-2n7xk\naffinity-nodeport-transition-2n7xk\naffinity-nodeport-transition-2n7xk\naffinity-nodeport-transition-2n7xk\naffinity-nodeport-transition-2n7xk\naffinity-nodeport-transition-2n7xk\naffinity-nodeport-transition-2n7xk"
Aug 9 23:47:37.973: INFO: Received response from host: affinity-nodeport-transition-2n7xk
Aug 9 23:47:37.973: INFO: Received response from host: affinity-nodeport-transition-2n7xk
Aug 9 23:47:37.973: INFO: Received response from host: affinity-nodeport-transition-2n7xk
Aug 9 23:47:37.973: INFO: Received response from host: affinity-nodeport-transition-2n7xk
Aug 9 23:47:37.973: INFO: Received response from host: affinity-nodeport-transition-2n7xk
Aug 9 23:47:37.973: INFO: Received response from host: affinity-nodeport-transition-2n7xk
Aug 9 23:47:37.973: INFO: Received response from host: affinity-nodeport-transition-2n7xk
Aug 9 23:47:37.973: INFO: Received response from host: affinity-nodeport-transition-2n7xk
Aug 9 23:47:37.973: INFO: Received response from host: affinity-nodeport-transition-2n7xk
Aug 9 23:47:37.973: INFO: Received response from host: affinity-nodeport-transition-2n7xk
Aug 9 23:47:37.973: INFO: Received response from host: affinity-nodeport-transition-2n7xk
Aug 9 23:47:37.973: INFO: Received response from host: affinity-nodeport-transition-2n7xk
Aug 9 23:47:37.973: INFO: Received response from host: affinity-nodeport-transition-2n7xk
Aug 9 23:47:37.973: INFO: Received response from host: affinity-nodeport-transition-2n7xk
Aug 9 23:47:37.973: INFO: Received response from host: affinity-nodeport-transition-2n7xk
Aug 9 23:47:37.973: INFO: Received response from host: affinity-nodeport-transition-2n7xk
Aug 9 23:47:37.973: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-4669, will wait for the garbage collector to delete the pods
Aug 9 23:47:38.077: INFO: Deleting ReplicationController affinity-nodeport-transition took: 14.859573ms
Aug 9 23:47:38.578: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 500.25766ms
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 9 23:47:53.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4669" for this suite.
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
• [SLOW TEST:28.809 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":86,"skipped":1308,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 9 23:47:53.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name s-test-opt-del-9ac5c10c-1c8b-4ade-a0da-d5e85aeaf35e
STEP: Creating secret with name s-test-opt-upd-bea1a35b-c455-4757-b189-1967b1ebf772
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-9ac5c10c-1c8b-4ade-a0da-d5e85aeaf35e
STEP: Updating secret s-test-opt-upd-bea1a35b-c455-4757-b189-1967b1ebf772
STEP: Creating secret with name s-test-opt-create-72a535f4-322b-453f-bbeb-973a04420c35
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 9 23:49:28.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1811" for this suite.
• [SLOW TEST:94.657 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":87,"skipped":1317,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 9 23:49:28.615: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[It] should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating all guestbook components
Aug 9 23:49:28.725: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-replica
  labels:
    app: agnhost
    role: replica
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: replica
    tier: backend
Aug 9 23:49:28.725: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8356'
Aug 9 23:49:29.105: INFO: stderr: ""
Aug 9 23:49:29.105: INFO: stdout: "service/agnhost-replica created\n"
Aug 9 23:49:29.105: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-primary
  labels:
    app: agnhost
    role: primary
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: primary
    tier: backend
Aug 9 23:49:29.105: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8356'
Aug 9 23:49:29.453: INFO: stderr: ""
Aug 9 23:49:29.454: INFO: stdout: "service/agnhost-primary created\n"
Aug 9 23:49:29.454: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Aug 9 23:49:29.454: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8356'
Aug 9 23:49:29.805: INFO: stderr: ""
Aug 9 23:49:29.805: INFO: stdout: "service/frontend created\n"
Aug 9 23:49:29.806: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
Aug 9 23:49:29.806: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8356'
Aug 9 23:49:30.148: INFO: stderr: ""
Aug 9 23:49:30.148: INFO: stdout: "deployment.apps/frontend created\n"
Aug 9 23:49:30.148: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-primary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: primary
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: primary
        tier: backend
    spec:
      containers:
      - name: primary
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Aug 9 23:49:30.148: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8356'
Aug 9 23:49:30.661: INFO: stderr: ""
Aug 9 23:49:30.661: INFO: stdout: "deployment.apps/agnhost-primary created\n"
Aug 9 23:49:30.662: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-replica
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: replica
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: replica
        tier: backend
    spec:
      containers:
      - name: replica
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
        args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Aug 9 23:49:30.662: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8356'
Aug 9 23:49:31.000: INFO: stderr: ""
Aug 9 23:49:31.000: INFO: stdout: "deployment.apps/agnhost-replica created\n"
STEP: validating guestbook app
Aug 9 23:49:31.000: INFO: Waiting for all frontend pods to be Running.
Aug 9 23:49:41.051: INFO: Waiting for frontend to serve content.
Aug 9 23:49:41.061: INFO: Trying to add a new entry to the guestbook.
Aug 9 23:49:41.071: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Aug 9 23:49:41.078: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8356'
Aug 9 23:49:41.216: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 9 23:49:41.217: INFO: stdout: "service \"agnhost-replica\" force deleted\n"
STEP: using delete to clean up resources
Aug 9 23:49:41.217: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8356'
Aug 9 23:49:41.354: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 9 23:49:41.354: INFO: stdout: "service \"agnhost-primary\" force deleted\n"
STEP: using delete to clean up resources
Aug 9 23:49:41.354: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8356'
Aug 9 23:49:41.550: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 9 23:49:41.550: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 9 23:49:41.550: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8356'
Aug 9 23:49:41.669: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 9 23:49:41.669: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 9 23:49:41.670: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8356'
Aug 9 23:49:41.806: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 9 23:49:41.806: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n"
STEP: using delete to clean up resources
Aug 9 23:49:41.806: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8356'
Aug 9 23:49:42.408: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 9 23:49:42.408: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 9 23:49:42.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8356" for this suite.
• [SLOW TEST:13.844 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Guestbook application
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:351
should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":303,"completed":88,"skipped":1318,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 9 23:49:42.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 9 23:50:02.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4291" for this suite.
• [SLOW TEST:20.462 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":303,"completed":89,"skipped":1352,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 9 23:50:02.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 9 23:50:07.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5121" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":90,"skipped":1384,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 9 23:50:07.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-q8zw
STEP: Creating a pod to test atomic-volume-subpath
Aug 9 23:50:07.168: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-q8zw" in namespace "subpath-848" to be "Succeeded or Failed"
Aug 9 23:50:07.188: INFO: Pod "pod-subpath-test-configmap-q8zw": Phase="Pending", Reason="", readiness=false. Elapsed: 19.546358ms
Aug 9 23:50:09.193: INFO: Pod "pod-subpath-test-configmap-q8zw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024746266s
Aug 9 23:50:11.198: INFO: Pod "pod-subpath-test-configmap-q8zw": Phase="Running", Reason="", readiness=true. Elapsed: 4.029806908s
Aug 9 23:50:13.202: INFO: Pod "pod-subpath-test-configmap-q8zw": Phase="Running", Reason="", readiness=true. Elapsed: 6.034044264s
Aug 9 23:50:15.206: INFO: Pod "pod-subpath-test-configmap-q8zw": Phase="Running", Reason="", readiness=true. Elapsed: 8.037934373s
Aug 9 23:50:17.211: INFO: Pod "pod-subpath-test-configmap-q8zw": Phase="Running", Reason="", readiness=true. Elapsed: 10.042305705s
Aug 9 23:50:19.215: INFO: Pod "pod-subpath-test-configmap-q8zw": Phase="Running", Reason="", readiness=true. Elapsed: 12.046552065s
Aug 9 23:50:21.221: INFO: Pod "pod-subpath-test-configmap-q8zw": Phase="Running", Reason="", readiness=true. Elapsed: 14.052590945s
Aug 9 23:50:23.225: INFO: Pod "pod-subpath-test-configmap-q8zw": Phase="Running", Reason="", readiness=true. Elapsed: 16.056659246s
Aug 9 23:50:25.230: INFO: Pod "pod-subpath-test-configmap-q8zw": Phase="Running", Reason="", readiness=true. Elapsed: 18.061448913s
Aug 9 23:50:27.234: INFO: Pod "pod-subpath-test-configmap-q8zw": Phase="Running", Reason="", readiness=true. Elapsed: 20.065568761s
Aug 9 23:50:29.238: INFO: Pod "pod-subpath-test-configmap-q8zw": Phase="Running", Reason="", readiness=true. Elapsed: 22.069572972s
Aug 9 23:50:31.242: INFO: Pod "pod-subpath-test-configmap-q8zw": Phase="Running", Reason="", readiness=true. Elapsed: 24.073687134s
Aug 9 23:50:33.246: INFO: Pod "pod-subpath-test-configmap-q8zw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.077969704s
STEP: Saw pod success
Aug 9 23:50:33.246: INFO: Pod "pod-subpath-test-configmap-q8zw" satisfied condition "Succeeded or Failed"
Aug 9 23:50:33.250: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-q8zw container test-container-subpath-configmap-q8zw:
STEP: delete the pod
Aug 9 23:50:33.285: INFO: Waiting for pod pod-subpath-test-configmap-q8zw to disappear
Aug 9 23:50:33.292: INFO: Pod pod-subpath-test-configmap-q8zw no longer exists
STEP: Deleting pod pod-subpath-test-configmap-q8zw
Aug 9 23:50:33.292: INFO: Deleting pod "pod-subpath-test-configmap-q8zw" in namespace "subpath-848"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 9 23:50:33.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-848" for this suite.
• [SLOW TEST:26.258 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":303,"completed":91,"skipped":1394,"failed":0}
S
------------------------------
[sig-network] Services should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 9 23:50:33.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 9 23:50:33.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4518" for this suite.
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
•{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":303,"completed":92,"skipped":1395,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 9 23:50:33.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Aug 9 23:50:33.516: INFO: Waiting up to 5m0s for pod "downwardapi-volume-35836421-267b-4cec-b9c9-d38895d6e139" in namespace "projected-7345" to be "Succeeded or Failed"
Aug 9 23:50:33.548: INFO: Pod "downwardapi-volume-35836421-267b-4cec-b9c9-d38895d6e139": Phase="Pending", Reason="", readiness=false. Elapsed: 31.924035ms
Aug 9 23:50:35.552: INFO: Pod "downwardapi-volume-35836421-267b-4cec-b9c9-d38895d6e139": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035805003s
Aug 9 23:50:37.555: INFO: Pod "downwardapi-volume-35836421-267b-4cec-b9c9-d38895d6e139": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039503258s
STEP: Saw pod success
Aug 9 23:50:37.556: INFO: Pod "downwardapi-volume-35836421-267b-4cec-b9c9-d38895d6e139" satisfied condition "Succeeded or Failed"
Aug 9 23:50:37.558: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-35836421-267b-4cec-b9c9-d38895d6e139 container client-container:
STEP: delete the pod
Aug 9 23:50:37.613: INFO: Waiting for pod downwardapi-volume-35836421-267b-4cec-b9c9-d38895d6e139 to disappear
Aug 9 23:50:37.622: INFO: Pod downwardapi-volume-35836421-267b-4cec-b9c9-d38895d6e139 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 9 23:50:37.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7345" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":303,"completed":93,"skipped":1405,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:50:37.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Aug 9 23:50:37.684: INFO: Waiting up to 5m0s for pod "pod-a945496f-5a45-4304-9aee-6e2f7c54f509" in namespace "emptydir-4040" to be "Succeeded or Failed" Aug 9 23:50:37.697: INFO: Pod "pod-a945496f-5a45-4304-9aee-6e2f7c54f509": Phase="Pending", Reason="", readiness=false. Elapsed: 12.32369ms Aug 9 23:50:39.701: INFO: Pod "pod-a945496f-5a45-4304-9aee-6e2f7c54f509": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016661285s Aug 9 23:50:41.706: INFO: Pod "pod-a945496f-5a45-4304-9aee-6e2f7c54f509": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.020992392s STEP: Saw pod success Aug 9 23:50:41.706: INFO: Pod "pod-a945496f-5a45-4304-9aee-6e2f7c54f509" satisfied condition "Succeeded or Failed" Aug 9 23:50:41.709: INFO: Trying to get logs from node latest-worker2 pod pod-a945496f-5a45-4304-9aee-6e2f7c54f509 container test-container: STEP: delete the pod Aug 9 23:50:41.772: INFO: Waiting for pod pod-a945496f-5a45-4304-9aee-6e2f7c54f509 to disappear Aug 9 23:50:41.826: INFO: Pod pod-a945496f-5a45-4304-9aee-6e2f7c54f509 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:50:41.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4040" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":94,"skipped":1406,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:50:41.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs Aug 9 23:50:41.915: INFO: Waiting up to 5m0s for pod "pod-2b9de065-e98f-40f4-8fd1-0b7ef33db43f" in namespace "emptydir-3992" to be "Succeeded or Failed" Aug 9 23:50:41.952: INFO: 
Pod "pod-2b9de065-e98f-40f4-8fd1-0b7ef33db43f": Phase="Pending", Reason="", readiness=false. Elapsed: 37.001831ms Aug 9 23:50:43.957: INFO: Pod "pod-2b9de065-e98f-40f4-8fd1-0b7ef33db43f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04172331s Aug 9 23:50:45.962: INFO: Pod "pod-2b9de065-e98f-40f4-8fd1-0b7ef33db43f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04638335s STEP: Saw pod success Aug 9 23:50:45.962: INFO: Pod "pod-2b9de065-e98f-40f4-8fd1-0b7ef33db43f" satisfied condition "Succeeded or Failed" Aug 9 23:50:45.965: INFO: Trying to get logs from node latest-worker2 pod pod-2b9de065-e98f-40f4-8fd1-0b7ef33db43f container test-container: STEP: delete the pod Aug 9 23:50:45.982: INFO: Waiting for pod pod-2b9de065-e98f-40f4-8fd1-0b7ef33db43f to disappear Aug 9 23:50:45.999: INFO: Pod pod-2b9de065-e98f-40f4-8fd1-0b7ef33db43f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:50:45.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3992" for this suite. 
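The EmptyDir tests above each create a short-lived pod that mounts a tmpfs-backed emptyDir, checks the requested mode bits, and exits, which is why the log waits for "Succeeded or Failed". A minimal sketch of such a pod, assuming an illustrative name, image, and mount path (the e2e framework generates its own):

```yaml
# Illustrative pod for an emptyDir tmpfs permission check; NOT the exact
# manifest the e2e framework builds. medium: Memory backs the volume with tmpfs.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-check            # hypothetical name
spec:
  restartPolicy: Never                 # run once, then report Succeeded/Failed
  containers:
  - name: test-container
    image: registry.k8s.io/e2e-test-images/agnhost:2.39   # assumed test image
    command: ["sh", "-c", "stat -c '%a' /test-volume && echo ok > /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                   # tmpfs, matching the (…,tmpfs) variants
```

The (non-root,0777,…) variant additionally runs the container as a non-root UID via `securityContext.runAsUser`.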
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":95,"skipped":1415,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:50:46.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 9 23:50:46.100: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Aug 9 23:50:46.131: INFO: Number of nodes with available pods: 0 Aug 9 23:50:46.131: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Aug 9 23:50:46.179: INFO: Number of nodes with available pods: 0 Aug 9 23:50:46.179: INFO: Node latest-worker2 is running more than one daemon pod Aug 9 23:50:47.184: INFO: Number of nodes with available pods: 0 Aug 9 23:50:47.184: INFO: Node latest-worker2 is running more than one daemon pod Aug 9 23:50:48.184: INFO: Number of nodes with available pods: 0 Aug 9 23:50:48.184: INFO: Node latest-worker2 is running more than one daemon pod Aug 9 23:50:49.183: INFO: Number of nodes with available pods: 0 Aug 9 23:50:49.183: INFO: Node latest-worker2 is running more than one daemon pod Aug 9 23:50:50.182: INFO: Number of nodes with available pods: 1 Aug 9 23:50:50.182: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Aug 9 23:50:50.270: INFO: Number of nodes with available pods: 1 Aug 9 23:50:50.270: INFO: Number of running nodes: 0, number of available pods: 1 Aug 9 23:50:51.273: INFO: Number of nodes with available pods: 0 Aug 9 23:50:51.273: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Aug 9 23:50:51.328: INFO: Number of nodes with available pods: 0 Aug 9 23:50:51.328: INFO: Node latest-worker2 is running more than one daemon pod Aug 9 23:50:52.332: INFO: Number of nodes with available pods: 0 Aug 9 23:50:52.332: INFO: Node latest-worker2 is running more than one daemon pod Aug 9 23:50:53.360: INFO: Number of nodes with available pods: 0 Aug 9 23:50:53.360: INFO: Node latest-worker2 is running more than one daemon pod Aug 9 23:50:54.332: INFO: Number of nodes with available pods: 0 Aug 9 23:50:54.332: INFO: Node latest-worker2 is running more than one daemon pod Aug 9 23:50:55.333: INFO: Number of nodes with available pods: 0 Aug 9 23:50:55.333: INFO: Node latest-worker2 is running more than one daemon pod Aug 9 23:50:56.333: INFO: Number of nodes with available pods: 
0 Aug 9 23:50:56.333: INFO: Node latest-worker2 is running more than one daemon pod Aug 9 23:50:57.333: INFO: Number of nodes with available pods: 0 Aug 9 23:50:57.333: INFO: Node latest-worker2 is running more than one daemon pod Aug 9 23:50:58.333: INFO: Number of nodes with available pods: 0 Aug 9 23:50:58.333: INFO: Node latest-worker2 is running more than one daemon pod Aug 9 23:50:59.333: INFO: Number of nodes with available pods: 0 Aug 9 23:50:59.333: INFO: Node latest-worker2 is running more than one daemon pod Aug 9 23:51:00.332: INFO: Number of nodes with available pods: 0 Aug 9 23:51:00.332: INFO: Node latest-worker2 is running more than one daemon pod Aug 9 23:51:01.333: INFO: Number of nodes with available pods: 0 Aug 9 23:51:01.333: INFO: Node latest-worker2 is running more than one daemon pod Aug 9 23:51:02.332: INFO: Number of nodes with available pods: 0 Aug 9 23:51:02.332: INFO: Node latest-worker2 is running more than one daemon pod Aug 9 23:51:03.343: INFO: Number of nodes with available pods: 0 Aug 9 23:51:03.344: INFO: Node latest-worker2 is running more than one daemon pod Aug 9 23:51:04.333: INFO: Number of nodes with available pods: 0 Aug 9 23:51:04.333: INFO: Node latest-worker2 is running more than one daemon pod Aug 9 23:51:05.372: INFO: Number of nodes with available pods: 0 Aug 9 23:51:05.373: INFO: Node latest-worker2 is running more than one daemon pod Aug 9 23:51:06.332: INFO: Number of nodes with available pods: 0 Aug 9 23:51:06.332: INFO: Node latest-worker2 is running more than one daemon pod Aug 9 23:51:07.333: INFO: Number of nodes with available pods: 0 Aug 9 23:51:07.333: INFO: Node latest-worker2 is running more than one daemon pod Aug 9 23:51:08.333: INFO: Number of nodes with available pods: 1 Aug 9 23:51:08.333: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting 
DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1419, will wait for the garbage collector to delete the pods Aug 9 23:51:08.394: INFO: Deleting DaemonSet.extensions daemon-set took: 5.131081ms Aug 9 23:51:08.794: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.150636ms Aug 9 23:51:13.911: INFO: Number of nodes with available pods: 0 Aug 9 23:51:13.911: INFO: Number of running nodes: 0, number of available pods: 0 Aug 9 23:51:13.913: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1419/daemonsets","resourceVersion":"5776917"},"items":null} Aug 9 23:51:13.915: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1419/pods","resourceVersion":"5776917"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:51:13.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1419" for this suite. 
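The "complex daemon" test above drives scheduling entirely through a node selector: the DaemonSet targets only labeled nodes, so relabeling a node launches or evicts its daemon pod, and the test also flips the update strategy to RollingUpdate mid-run. A hedged sketch of the moving parts (label keys, values, and image are illustrative; the e2e test generates its own labels):

```yaml
# Illustrative DaemonSet pinned to nodes labeled color=blue (assumed labels).
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate        # the test switches to this strategy mid-run
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: blue            # relabeling a node away from blue evicts its pod
      containers:
      - name: app
        image: registry.k8s.io/pause:3.9    # assumed placeholder image
```

Relabeling then looks like `kubectl label node latest-worker2 color=green --overwrite`, after which the controller removes the daemon pod from that node, matching the "wait for daemons to be unscheduled" step in the log.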
• [SLOW TEST:27.951 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":303,"completed":96,"skipped":1440,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:51:13.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 9 23:53:14.062: INFO: Deleting pod "var-expansion-90d99cd2-3066-4310-92eb-4ae9fbedb6cb" in namespace "var-expansion-1764" Aug 9 23:53:14.066: INFO: Wait up to 5m0s for pod "var-expansion-90d99cd2-3066-4310-92eb-4ae9fbedb6cb" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:53:18.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1764" for this suite. 
• [SLOW TEST:124.142 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":303,"completed":97,"skipped":1461,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:53:18.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-1798 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-1798 STEP: Creating statefulset with conflicting port in namespace statefulset-1798 STEP: Waiting until pod test-pod will start running in 
namespace statefulset-1798 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-1798 Aug 9 23:53:24.303: INFO: Observed stateful pod in namespace: statefulset-1798, name: ss-0, uid: 167a8a11-7a7a-425f-afac-a37eb64436bc, status phase: Pending. Waiting for statefulset controller to delete. Aug 9 23:53:24.856: INFO: Observed stateful pod in namespace: statefulset-1798, name: ss-0, uid: 167a8a11-7a7a-425f-afac-a37eb64436bc, status phase: Failed. Waiting for statefulset controller to delete. Aug 9 23:53:24.871: INFO: Observed stateful pod in namespace: statefulset-1798, name: ss-0, uid: 167a8a11-7a7a-425f-afac-a37eb64436bc, status phase: Failed. Waiting for statefulset controller to delete. Aug 9 23:53:24.888: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1798 STEP: Removing pod with conflicting port in namespace statefulset-1798 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-1798 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Aug 9 23:53:29.078: INFO: Deleting all statefulset in ns statefulset-1798 Aug 9 23:53:29.081: INFO: Scaling statefulset ss to 0 Aug 9 23:53:49.102: INFO: Waiting for statefulset status.replicas updated to 0 Aug 9 23:53:49.106: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:53:49.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1798" for this suite. 
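The StatefulSet test above forces the eviction loop with a host-port conflict: a plain pod already holds a hostPort on the chosen node, so ss-0 (which requests the same hostPort) is scheduled there and fails; once the conflicting pod is removed, the controller recreates ss-0 and it runs. A hedged sketch of the conflicting pod (node name from the log; the port value is an assumption, as the test picks its own):

```yaml
# Illustrative conflicting pod; ss-0's template requests the same hostPort,
# so it cannot start on this node until this pod is deleted.
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  nodeName: latest-worker2          # pin to the node chosen for ss-0
  containers:
  - name: webserver
    image: registry.k8s.io/pause:3.9    # assumed placeholder image
    ports:
    - containerPort: 21017          # assumed port value
      hostPort: 21017               # the contended resource
```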
• [SLOW TEST:31.047 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":303,"completed":98,"skipped":1476,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:53:49.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0809 23:54:01.792100 8 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. 
Aug 9 23:55:03.867: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. Aug 9 23:55:03.867: INFO: Deleting pod "simpletest-rc-to-be-deleted-2grlf" in namespace "gc-9104" Aug 9 23:55:03.893: INFO: Deleting pod "simpletest-rc-to-be-deleted-57wxd" in namespace "gc-9104" Aug 9 23:55:04.075: INFO: Deleting pod "simpletest-rc-to-be-deleted-dqvd2" in namespace "gc-9104" Aug 9 23:55:04.479: INFO: Deleting pod "simpletest-rc-to-be-deleted-f2chr" in namespace "gc-9104" Aug 9 23:55:04.851: INFO: Deleting pod "simpletest-rc-to-be-deleted-fqwts" in namespace "gc-9104" [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:55:05.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9104" for this suite. • [SLOW TEST:76.188 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":303,"completed":99,"skipped":1477,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:55:05.337: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Aug 9 23:55:05.801: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6424 /api/v1/namespaces/watch-6424/configmaps/e2e-watch-test-configmap-a 28ec4ea8-b65c-4de8-aef2-0e440816655b 5777956 0 2020-08-09 23:55:05 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-09 23:55:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 9 23:55:05.801: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6424 /api/v1/namespaces/watch-6424/configmaps/e2e-watch-test-configmap-a 28ec4ea8-b65c-4de8-aef2-0e440816655b 5777956 0 2020-08-09 23:55:05 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-09 23:55:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Aug 9 23:55:15.810: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6424 /api/v1/namespaces/watch-6424/configmaps/e2e-watch-test-configmap-a 28ec4ea8-b65c-4de8-aef2-0e440816655b 5778046 0 2020-08-09 23:55:05 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test 
Update v1 2020-08-09 23:55:15 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 9 23:55:15.811: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6424 /api/v1/namespaces/watch-6424/configmaps/e2e-watch-test-configmap-a 28ec4ea8-b65c-4de8-aef2-0e440816655b 5778046 0 2020-08-09 23:55:05 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-09 23:55:15 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Aug 9 23:55:25.819: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6424 /api/v1/namespaces/watch-6424/configmaps/e2e-watch-test-configmap-a 28ec4ea8-b65c-4de8-aef2-0e440816655b 5778076 0 2020-08-09 23:55:05 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-09 23:55:25 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 9 23:55:25.820: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6424 /api/v1/namespaces/watch-6424/configmaps/e2e-watch-test-configmap-a 28ec4ea8-b65c-4de8-aef2-0e440816655b 5778076 0 2020-08-09 23:55:05 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-09 23:55:25 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting 
configmap A and ensuring the correct watchers observe the notification Aug 9 23:55:35.827: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6424 /api/v1/namespaces/watch-6424/configmaps/e2e-watch-test-configmap-a 28ec4ea8-b65c-4de8-aef2-0e440816655b 5778106 0 2020-08-09 23:55:05 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-09 23:55:25 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 9 23:55:35.827: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6424 /api/v1/namespaces/watch-6424/configmaps/e2e-watch-test-configmap-a 28ec4ea8-b65c-4de8-aef2-0e440816655b 5778106 0 2020-08-09 23:55:05 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-09 23:55:25 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Aug 9 23:55:45.836: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6424 /api/v1/namespaces/watch-6424/configmaps/e2e-watch-test-configmap-b 6c5bbb6a-6e0b-4512-b8f7-f6642c1a718a 5778136 0 2020-08-09 23:55:45 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-08-09 23:55:45 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 9 23:55:45.836: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6424 /api/v1/namespaces/watch-6424/configmaps/e2e-watch-test-configmap-b 6c5bbb6a-6e0b-4512-b8f7-f6642c1a718a 5778136 0 2020-08-09 
23:55:45 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-08-09 23:55:45 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Aug 9 23:55:55.844: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6424 /api/v1/namespaces/watch-6424/configmaps/e2e-watch-test-configmap-b 6c5bbb6a-6e0b-4512-b8f7-f6642c1a718a 5778164 0 2020-08-09 23:55:45 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-08-09 23:55:45 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 9 23:55:55.844: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6424 /api/v1/namespaces/watch-6424/configmaps/e2e-watch-test-configmap-b 6c5bbb6a-6e0b-4512-b8f7-f6642c1a718a 5778164 0 2020-08-09 23:55:45 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-08-09 23:55:45 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:56:05.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6424" for this suite. 
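The watcher test above exercises label-selected watches: separate watches on label A, label B, and A-or-B each receive only the ADDED/MODIFIED/DELETED events for ConfigMaps matching their selector, which is why every event in the log appears exactly twice (its own watcher plus the A-or-B watcher). The object under watch, with name and label copied from the log (the `mutation` data key is what the test increments on each modify):

```yaml
# The ConfigMap matched by the label-A watcher in the log above.
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  labels:
    watch-this-configmap: multiple-watchers-A
data:
  mutation: "1"      # bumped to "2" by the second modification in the log
```

Outside the test framework, an equivalent watch is `kubectl get configmaps -l watch-this-configmap=multiple-watchers-A --watch`, which emits one line per ADDED/MODIFIED/DELETED event.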
• [SLOW TEST:60.518 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":303,"completed":100,"skipped":1490,"failed":0} SSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:56:05.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 9 23:56:05.979: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:56:10.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5013" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":303,"completed":101,"skipped":1495,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:56:10.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Aug 9 23:56:10.250: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9212 /api/v1/namespaces/watch-9212/configmaps/e2e-watch-test-watch-closed 70a51c1c-8d74-421a-94ad-152fb29f5aae 5778225 0 2020-08-09 23:56:10 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-08-09 23:56:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 9 23:56:10.251: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9212 /api/v1/namespaces/watch-9212/configmaps/e2e-watch-test-watch-closed 70a51c1c-8d74-421a-94ad-152fb29f5aae 5778226 0 2020-08-09 23:56:10 +0000 UTC 
map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-08-09 23:56:10 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Aug 9 23:56:10.284: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9212 /api/v1/namespaces/watch-9212/configmaps/e2e-watch-test-watch-closed 70a51c1c-8d74-421a-94ad-152fb29f5aae 5778227 0 2020-08-09 23:56:10 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-08-09 23:56:10 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 9 23:56:10.285: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-9212 /api/v1/namespaces/watch-9212/configmaps/e2e-watch-test-watch-closed 70a51c1c-8d74-421a-94ad-152fb29f5aae 5778228 0 2020-08-09 23:56:10 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-08-09 23:56:10 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:56:10.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9212" for this 
suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":303,"completed":102,"skipped":1528,"failed":0} ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:56:10.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 9 23:56:10.374: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b12dd47f-c064-4495-b282-f914010c4221" in namespace "downward-api-4272" to be "Succeeded or Failed" Aug 9 23:56:10.406: INFO: Pod "downwardapi-volume-b12dd47f-c064-4495-b282-f914010c4221": Phase="Pending", Reason="", readiness=false. Elapsed: 32.489859ms Aug 9 23:56:12.410: INFO: Pod "downwardapi-volume-b12dd47f-c064-4495-b282-f914010c4221": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036171269s Aug 9 23:56:14.418: INFO: Pod "downwardapi-volume-b12dd47f-c064-4495-b282-f914010c4221": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.044804716s STEP: Saw pod success Aug 9 23:56:14.418: INFO: Pod "downwardapi-volume-b12dd47f-c064-4495-b282-f914010c4221" satisfied condition "Succeeded or Failed" Aug 9 23:56:14.421: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-b12dd47f-c064-4495-b282-f914010c4221 container client-container: STEP: delete the pod Aug 9 23:56:14.463: INFO: Waiting for pod downwardapi-volume-b12dd47f-c064-4495-b282-f914010c4221 to disappear Aug 9 23:56:14.490: INFO: Pod downwardapi-volume-b12dd47f-c064-4495-b282-f914010c4221 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:56:14.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4272" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":303,"completed":103,"skipped":1528,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:56:14.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in different groups (two CRDs) show 
up in OpenAPI documentation Aug 9 23:56:14.634: INFO: >>> kubeConfig: /root/.kube/config Aug 9 23:56:16.635: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:56:28.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7415" for this suite. • [SLOW TEST:14.091 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":303,"completed":104,"skipped":1600,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:56:28.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-b7ff1a03-e51d-441f-b407-0596339c01ea STEP: Creating a pod to test consume secrets Aug 9 23:56:28.752: INFO: Waiting up to 5m0s for pod "pod-secrets-e804e257-6f25-407d-b48c-ecb89c74da72" in namespace "secrets-7112" to be "Succeeded or Failed" Aug 9 23:56:28.755: INFO: Pod "pod-secrets-e804e257-6f25-407d-b48c-ecb89c74da72": Phase="Pending", Reason="", readiness=false. Elapsed: 3.16759ms Aug 9 23:56:30.893: INFO: Pod "pod-secrets-e804e257-6f25-407d-b48c-ecb89c74da72": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140364866s Aug 9 23:56:32.896: INFO: Pod "pod-secrets-e804e257-6f25-407d-b48c-ecb89c74da72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.144238278s STEP: Saw pod success Aug 9 23:56:32.896: INFO: Pod "pod-secrets-e804e257-6f25-407d-b48c-ecb89c74da72" satisfied condition "Succeeded or Failed" Aug 9 23:56:32.899: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-e804e257-6f25-407d-b48c-ecb89c74da72 container secret-volume-test: STEP: delete the pod Aug 9 23:56:32.919: INFO: Waiting for pod pod-secrets-e804e257-6f25-407d-b48c-ecb89c74da72 to disappear Aug 9 23:56:32.923: INFO: Pod pod-secrets-e804e257-6f25-407d-b48c-ecb89c74da72 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:56:32.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7112" for this suite. STEP: Destroying namespace "secret-namespace-1471" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":303,"completed":105,"skipped":1611,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:56:32.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-203b0262-08ab-49dc-a772-edcb41fb7c09 STEP: Creating configMap with name cm-test-opt-upd-b6cd13e2-0923-4ac1-bed0-db9b77d204fe STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-203b0262-08ab-49dc-a772-edcb41fb7c09 STEP: Updating configmap cm-test-opt-upd-b6cd13e2-0923-4ac1-bed0-db9b77d204fe STEP: Creating configMap with name cm-test-opt-create-f9d5c2d8-b33c-44b8-ab5c-4c2c7d51e89d STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:56:43.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6179" for this suite. 
• [SLOW TEST:10.209 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":106,"skipped":1620,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:56:43.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Aug 9 23:56:43.219: INFO: Waiting up to 5m0s for pod "pod-d6f11293-e992-453e-a508-9b9c0c854260" in namespace "emptydir-3116" to be "Succeeded or Failed" Aug 9 23:56:43.223: INFO: Pod "pod-d6f11293-e992-453e-a508-9b9c0c854260": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101991ms Aug 9 23:56:45.227: INFO: Pod "pod-d6f11293-e992-453e-a508-9b9c0c854260": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008494852s Aug 9 23:56:47.232: INFO: Pod "pod-d6f11293-e992-453e-a508-9b9c0c854260": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012718732s STEP: Saw pod success Aug 9 23:56:47.232: INFO: Pod "pod-d6f11293-e992-453e-a508-9b9c0c854260" satisfied condition "Succeeded or Failed" Aug 9 23:56:47.234: INFO: Trying to get logs from node latest-worker2 pod pod-d6f11293-e992-453e-a508-9b9c0c854260 container test-container: STEP: delete the pod Aug 9 23:56:47.287: INFO: Waiting for pod pod-d6f11293-e992-453e-a508-9b9c0c854260 to disappear Aug 9 23:56:47.301: INFO: Pod pod-d6f11293-e992-453e-a508-9b9c0c854260 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 9 23:56:47.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3116" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":107,"skipped":1627,"failed":0} SSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 9 23:56:47.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Aug 9 23:56:47.399: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 9 23:56:47.407: INFO: Waiting for terminating namespaces to be deleted... 
Aug 9 23:56:47.410: INFO: Logging pods the apiserver thinks are on node latest-worker before test Aug 9 23:56:47.415: INFO: coredns-f9fd979d6-s745j from kube-system started at 2020-07-19 21:39:25 +0000 UTC (1 container statuses recorded) Aug 9 23:56:47.415: INFO: Container coredns ready: true, restart count 0 Aug 9 23:56:47.415: INFO: coredns-f9fd979d6-zs4sj from kube-system started at 2020-07-19 21:39:36 +0000 UTC (1 container statuses recorded) Aug 9 23:56:47.415: INFO: Container coredns ready: true, restart count 0 Aug 9 23:56:47.415: INFO: kindnet-46dnt from kube-system started at 2020-07-19 21:38:46 +0000 UTC (1 container statuses recorded) Aug 9 23:56:47.415: INFO: Container kindnet-cni ready: true, restart count 0 Aug 9 23:56:47.415: INFO: kube-proxy-sxpg9 from kube-system started at 2020-07-19 21:38:45 +0000 UTC (1 container statuses recorded) Aug 9 23:56:47.415: INFO: Container kube-proxy ready: true, restart count 0 Aug 9 23:56:47.415: INFO: local-path-provisioner-8b46957d4-2gzpd from local-path-storage started at 2020-07-19 21:39:25 +0000 UTC (1 container statuses recorded) Aug 9 23:56:47.415: INFO: Container local-path-provisioner ready: true, restart count 0 Aug 9 23:56:47.415: INFO: Logging pods the apiserver thinks are on node latest-worker2 before test Aug 9 23:56:47.420: INFO: pod-configmaps-c1086f31-dc84-4914-860e-8428f1ca97fc from configmap-6179 started at 2020-08-09 23:56:33 +0000 UTC (3 container statuses recorded) Aug 9 23:56:47.420: INFO: Container createcm-volume-test ready: true, restart count 0 Aug 9 23:56:47.420: INFO: Container delcm-volume-test ready: true, restart count 0 Aug 9 23:56:47.420: INFO: Container updcm-volume-test ready: true, restart count 0 Aug 9 23:56:47.420: INFO: kindnet-g6zbt from kube-system started at 2020-07-19 21:38:46 +0000 UTC (1 container statuses recorded) Aug 9 23:56:47.420: INFO: Container kindnet-cni ready: true, restart count 0 Aug 9 23:56:47.420: INFO: kube-proxy-nsnzn from kube-system started at 2020-07-19 
21:38:45 +0000 UTC (1 container statuses recorded) Aug 9 23:56:47.420: INFO: Container kube-proxy ready: true, restart count 0 Aug 9 23:56:47.420: INFO: pod-exec-websocket-26aea69c-07f9-4d87-9539-14226c013203 from pods-5013 started at 2020-08-09 23:56:06 +0000 UTC (1 container statuses recorded) Aug 9 23:56:47.420: INFO: Container main ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-8f9016bc-f28b-4f68-b0be-5c4f507deb95 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-8f9016bc-f28b-4f68-b0be-5c4f507deb95 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-8f9016bc-f28b-4f68-b0be-5c4f507deb95 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:01:57.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3901" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:310.330 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":303,"completed":108,"skipped":1636,"failed":0} SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:01:57.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-5554 [It] should perform rolling updates and roll backs of template modifications [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Aug 10 00:01:57.811: INFO: Found 0 stateful pods, waiting for 3 Aug 10 00:02:07.816: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 10 00:02:07.816: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 10 00:02:07.816: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Aug 10 00:02:17.817: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 10 00:02:17.817: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 10 00:02:17.817: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Aug 10 00:02:17.850: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5554 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 10 00:02:21.039: INFO: stderr: "I0810 00:02:20.896637 628 log.go:181] (0xc000e34000) (0xc000444960) Create stream\nI0810 00:02:20.896822 628 log.go:181] (0xc000e34000) (0xc000444960) Stream added, broadcasting: 1\nI0810 00:02:20.898943 628 log.go:181] (0xc000e34000) Reply frame received for 1\nI0810 00:02:20.898989 628 log.go:181] (0xc000e34000) (0xc000376000) Create stream\nI0810 00:02:20.899005 628 log.go:181] (0xc000e34000) (0xc000376000) Stream added, broadcasting: 3\nI0810 00:02:20.899914 628 log.go:181] (0xc000e34000) Reply frame received for 3\nI0810 00:02:20.899954 628 log.go:181] (0xc000e34000) (0xc000376960) Create stream\nI0810 00:02:20.899969 628 log.go:181] (0xc000e34000) (0xc000376960) Stream added, broadcasting: 5\nI0810 00:02:20.901237 628 log.go:181] (0xc000e34000) Reply frame received for 5\nI0810 00:02:20.991099 628 log.go:181] (0xc000e34000) Data frame 
received for 5\nI0810 00:02:20.991197 628 log.go:181] (0xc000376960) (5) Data frame handling\nI0810 00:02:20.991245 628 log.go:181] (0xc000376960) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0810 00:02:21.032482 628 log.go:181] (0xc000e34000) Data frame received for 3\nI0810 00:02:21.032502 628 log.go:181] (0xc000376000) (3) Data frame handling\nI0810 00:02:21.032512 628 log.go:181] (0xc000376000) (3) Data frame sent\nI0810 00:02:21.032640 628 log.go:181] (0xc000e34000) Data frame received for 3\nI0810 00:02:21.032653 628 log.go:181] (0xc000376000) (3) Data frame handling\nI0810 00:02:21.033179 628 log.go:181] (0xc000e34000) Data frame received for 5\nI0810 00:02:21.033203 628 log.go:181] (0xc000376960) (5) Data frame handling\nI0810 00:02:21.034670 628 log.go:181] (0xc000e34000) Data frame received for 1\nI0810 00:02:21.034690 628 log.go:181] (0xc000444960) (1) Data frame handling\nI0810 00:02:21.034712 628 log.go:181] (0xc000444960) (1) Data frame sent\nI0810 00:02:21.034733 628 log.go:181] (0xc000e34000) (0xc000444960) Stream removed, broadcasting: 1\nI0810 00:02:21.034751 628 log.go:181] (0xc000e34000) Go away received\nI0810 00:02:21.035110 628 log.go:181] (0xc000e34000) (0xc000444960) Stream removed, broadcasting: 1\nI0810 00:02:21.035125 628 log.go:181] (0xc000e34000) (0xc000376000) Stream removed, broadcasting: 3\nI0810 00:02:21.035133 628 log.go:181] (0xc000e34000) (0xc000376960) Stream removed, broadcasting: 5\n" Aug 10 00:02:21.039: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 10 00:02:21.039: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Aug 10 00:02:31.069: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal 
order Aug 10 00:02:41.127: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5554 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 10 00:02:41.329: INFO: stderr: "I0810 00:02:41.262951 646 log.go:181] (0xc000b20000) (0xc000d210e0) Create stream\nI0810 00:02:41.263009 646 log.go:181] (0xc000b20000) (0xc000d210e0) Stream added, broadcasting: 1\nI0810 00:02:41.264533 646 log.go:181] (0xc000b20000) Reply frame received for 1\nI0810 00:02:41.264572 646 log.go:181] (0xc000b20000) (0xc000b048c0) Create stream\nI0810 00:02:41.264589 646 log.go:181] (0xc000b20000) (0xc000b048c0) Stream added, broadcasting: 3\nI0810 00:02:41.265493 646 log.go:181] (0xc000b20000) Reply frame received for 3\nI0810 00:02:41.265525 646 log.go:181] (0xc000b20000) (0xc000d2ae60) Create stream\nI0810 00:02:41.265536 646 log.go:181] (0xc000b20000) (0xc000d2ae60) Stream added, broadcasting: 5\nI0810 00:02:41.266254 646 log.go:181] (0xc000b20000) Reply frame received for 5\nI0810 00:02:41.322492 646 log.go:181] (0xc000b20000) Data frame received for 5\nI0810 00:02:41.322537 646 log.go:181] (0xc000d2ae60) (5) Data frame handling\nI0810 00:02:41.322552 646 log.go:181] (0xc000d2ae60) (5) Data frame sent\nI0810 00:02:41.322561 646 log.go:181] (0xc000b20000) Data frame received for 5\nI0810 00:02:41.322569 646 log.go:181] (0xc000d2ae60) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0810 00:02:41.322594 646 log.go:181] (0xc000b20000) Data frame received for 3\nI0810 00:02:41.322608 646 log.go:181] (0xc000b048c0) (3) Data frame handling\nI0810 00:02:41.322630 646 log.go:181] (0xc000b048c0) (3) Data frame sent\nI0810 00:02:41.322648 646 log.go:181] (0xc000b20000) Data frame received for 3\nI0810 00:02:41.322658 646 log.go:181] (0xc000b048c0) (3) Data frame handling\nI0810 00:02:41.323684 646 log.go:181] (0xc000b20000) Data frame received for 1\nI0810 
00:02:41.323769 646 log.go:181] (0xc000d210e0) (1) Data frame handling\nI0810 00:02:41.323824 646 log.go:181] (0xc000d210e0) (1) Data frame sent\nI0810 00:02:41.323850 646 log.go:181] (0xc000b20000) (0xc000d210e0) Stream removed, broadcasting: 1\nI0810 00:02:41.323881 646 log.go:181] (0xc000b20000) Go away received\nI0810 00:02:41.324303 646 log.go:181] (0xc000b20000) (0xc000d210e0) Stream removed, broadcasting: 1\nI0810 00:02:41.324337 646 log.go:181] (0xc000b20000) (0xc000b048c0) Stream removed, broadcasting: 3\nI0810 00:02:41.324350 646 log.go:181] (0xc000b20000) (0xc000d2ae60) Stream removed, broadcasting: 5\n" Aug 10 00:02:41.329: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 10 00:02:41.329: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 10 00:03:01.349: INFO: Waiting for StatefulSet statefulset-5554/ss2 to complete update Aug 10 00:03:01.350: INFO: Waiting for Pod statefulset-5554/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Aug 10 00:03:11.356: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5554 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 10 00:03:11.628: INFO: stderr: "I0810 00:03:11.486981 664 log.go:181] (0xc000141550) (0xc000ae9ea0) Create stream\nI0810 00:03:11.487043 664 log.go:181] (0xc000141550) (0xc000ae9ea0) Stream added, broadcasting: 1\nI0810 00:03:11.488705 664 log.go:181] (0xc000141550) Reply frame received for 1\nI0810 00:03:11.488880 664 log.go:181] (0xc000141550) (0xc000ad85a0) Create stream\nI0810 00:03:11.488905 664 log.go:181] (0xc000141550) (0xc000ad85a0) Stream added, broadcasting: 3\nI0810 00:03:11.489883 664 log.go:181] (0xc000141550) Reply frame received for 3\nI0810 00:03:11.489919 664 log.go:181] 
(0xc000141550) (0xc00099e140) Create stream\nI0810 00:03:11.489932 664 log.go:181] (0xc000141550) (0xc00099e140) Stream added, broadcasting: 5\nI0810 00:03:11.490722 664 log.go:181] (0xc000141550) Reply frame received for 5\nI0810 00:03:11.575185 664 log.go:181] (0xc000141550) Data frame received for 5\nI0810 00:03:11.575218 664 log.go:181] (0xc00099e140) (5) Data frame handling\nI0810 00:03:11.575238 664 log.go:181] (0xc00099e140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0810 00:03:11.615938 664 log.go:181] (0xc000141550) Data frame received for 3\nI0810 00:03:11.616011 664 log.go:181] (0xc000ad85a0) (3) Data frame handling\nI0810 00:03:11.616035 664 log.go:181] (0xc000ad85a0) (3) Data frame sent\nI0810 00:03:11.616055 664 log.go:181] (0xc000141550) Data frame received for 3\nI0810 00:03:11.616090 664 log.go:181] (0xc000141550) Data frame received for 5\nI0810 00:03:11.616104 664 log.go:181] (0xc00099e140) (5) Data frame handling\nI0810 00:03:11.616120 664 log.go:181] (0xc000ad85a0) (3) Data frame handling\nI0810 00:03:11.618490 664 log.go:181] (0xc000141550) Data frame received for 1\nI0810 00:03:11.618512 664 log.go:181] (0xc000ae9ea0) (1) Data frame handling\nI0810 00:03:11.618531 664 log.go:181] (0xc000ae9ea0) (1) Data frame sent\nI0810 00:03:11.618546 664 log.go:181] (0xc000141550) (0xc000ae9ea0) Stream removed, broadcasting: 1\nI0810 00:03:11.618999 664 log.go:181] (0xc000141550) (0xc000ae9ea0) Stream removed, broadcasting: 1\nI0810 00:03:11.619024 664 log.go:181] (0xc000141550) (0xc000ad85a0) Stream removed, broadcasting: 3\nI0810 00:03:11.619034 664 log.go:181] (0xc000141550) (0xc00099e140) Stream removed, broadcasting: 5\n" Aug 10 00:03:11.629: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 10 00:03:11.629: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 10 00:03:21.664: INFO: Updating 
stateful set ss2 STEP: Rolling back update in reverse ordinal order Aug 10 00:03:31.722: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5554 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 10 00:03:31.946: INFO: stderr: "I0810 00:03:31.866256 682 log.go:181] (0xc000e4d080) (0xc000ec2460) Create stream\nI0810 00:03:31.866308 682 log.go:181] (0xc000e4d080) (0xc000ec2460) Stream added, broadcasting: 1\nI0810 00:03:31.870983 682 log.go:181] (0xc000e4d080) Reply frame received for 1\nI0810 00:03:31.871042 682 log.go:181] (0xc000e4d080) (0xc000bb5180) Create stream\nI0810 00:03:31.871058 682 log.go:181] (0xc000e4d080) (0xc000bb5180) Stream added, broadcasting: 3\nI0810 00:03:31.872045 682 log.go:181] (0xc000e4d080) Reply frame received for 3\nI0810 00:03:31.872075 682 log.go:181] (0xc000e4d080) (0xc000bae460) Create stream\nI0810 00:03:31.872083 682 log.go:181] (0xc000e4d080) (0xc000bae460) Stream added, broadcasting: 5\nI0810 00:03:31.873105 682 log.go:181] (0xc000e4d080) Reply frame received for 5\nI0810 00:03:31.939319 682 log.go:181] (0xc000e4d080) Data frame received for 3\nI0810 00:03:31.939349 682 log.go:181] (0xc000bb5180) (3) Data frame handling\nI0810 00:03:31.939357 682 log.go:181] (0xc000bb5180) (3) Data frame sent\nI0810 00:03:31.939363 682 log.go:181] (0xc000e4d080) Data frame received for 3\nI0810 00:03:31.939367 682 log.go:181] (0xc000bb5180) (3) Data frame handling\nI0810 00:03:31.939382 682 log.go:181] (0xc000e4d080) Data frame received for 5\nI0810 00:03:31.939406 682 log.go:181] (0xc000bae460) (5) Data frame handling\nI0810 00:03:31.939422 682 log.go:181] (0xc000bae460) (5) Data frame sent\nI0810 00:03:31.939433 682 log.go:181] (0xc000e4d080) Data frame received for 5\nI0810 00:03:31.939442 682 log.go:181] (0xc000bae460) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0810 00:03:31.941761 682 
log.go:181] (0xc000e4d080) Data frame received for 1\nI0810 00:03:31.941783 682 log.go:181] (0xc000ec2460) (1) Data frame handling\nI0810 00:03:31.941807 682 log.go:181] (0xc000ec2460) (1) Data frame sent\nI0810 00:03:31.941821 682 log.go:181] (0xc000e4d080) (0xc000ec2460) Stream removed, broadcasting: 1\nI0810 00:03:31.941832 682 log.go:181] (0xc000e4d080) Go away received\nI0810 00:03:31.942115 682 log.go:181] (0xc000e4d080) (0xc000ec2460) Stream removed, broadcasting: 1\nI0810 00:03:31.942127 682 log.go:181] (0xc000e4d080) (0xc000bb5180) Stream removed, broadcasting: 3\nI0810 00:03:31.942133 682 log.go:181] (0xc000e4d080) (0xc000bae460) Stream removed, broadcasting: 5\n" Aug 10 00:03:31.946: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 10 00:03:31.946: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 10 00:03:41.968: INFO: Waiting for StatefulSet statefulset-5554/ss2 to complete update Aug 10 00:03:41.968: INFO: Waiting for Pod statefulset-5554/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Aug 10 00:03:41.968: INFO: Waiting for Pod statefulset-5554/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Aug 10 00:03:41.968: INFO: Waiting for Pod statefulset-5554/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Aug 10 00:03:51.975: INFO: Waiting for StatefulSet statefulset-5554/ss2 to complete update Aug 10 00:03:51.975: INFO: Waiting for Pod statefulset-5554/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Aug 10 00:03:51.975: INFO: Waiting for Pod statefulset-5554/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Aug 10 00:04:01.975: INFO: Waiting for StatefulSet statefulset-5554/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Aug 10 00:04:11.977: INFO: Deleting all statefulset in ns statefulset-5554 Aug 10 00:04:11.980: INFO: Scaling statefulset ss2 to 0 Aug 10 00:04:42.002: INFO: Waiting for statefulset status.replicas updated to 0 Aug 10 00:04:42.006: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:04:42.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5554" for this suite. • [SLOW TEST:164.407 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":303,"completed":109,"skipped":1639,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:04:42.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-2208 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 10 00:04:42.127: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Aug 10 00:04:42.195: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 10 00:04:44.667: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 10 00:04:46.204: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 10 00:04:48.198: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 00:04:50.199: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 00:04:52.199: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 00:04:54.199: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 00:04:56.199: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 00:04:58.200: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 00:05:00.199: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 00:05:02.199: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 00:05:04.199: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 00:05:06.198: INFO: The status of Pod netserver-0 is Running (Ready = true) Aug 10 00:05:06.203: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Aug 10 00:05:12.291: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.69:8080/hostName | grep -v '^\s*$'] 
Namespace:pod-network-test-2208 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 10 00:05:12.291: INFO: >>> kubeConfig: /root/.kube/config I0810 00:05:12.328540 8 log.go:181] (0xc001798420) (0xc002da3720) Create stream I0810 00:05:12.328573 8 log.go:181] (0xc001798420) (0xc002da3720) Stream added, broadcasting: 1 I0810 00:05:12.331114 8 log.go:181] (0xc001798420) Reply frame received for 1 I0810 00:05:12.331173 8 log.go:181] (0xc001798420) (0xc002114f00) Create stream I0810 00:05:12.331195 8 log.go:181] (0xc001798420) (0xc002114f00) Stream added, broadcasting: 3 I0810 00:05:12.332202 8 log.go:181] (0xc001798420) Reply frame received for 3 I0810 00:05:12.332244 8 log.go:181] (0xc001798420) (0xc0038437c0) Create stream I0810 00:05:12.332259 8 log.go:181] (0xc001798420) (0xc0038437c0) Stream added, broadcasting: 5 I0810 00:05:12.333360 8 log.go:181] (0xc001798420) Reply frame received for 5 I0810 00:05:12.421693 8 log.go:181] (0xc001798420) Data frame received for 3 I0810 00:05:12.421753 8 log.go:181] (0xc002114f00) (3) Data frame handling I0810 00:05:12.421792 8 log.go:181] (0xc002114f00) (3) Data frame sent I0810 00:05:12.422193 8 log.go:181] (0xc001798420) Data frame received for 3 I0810 00:05:12.422216 8 log.go:181] (0xc002114f00) (3) Data frame handling I0810 00:05:12.422251 8 log.go:181] (0xc001798420) Data frame received for 5 I0810 00:05:12.422276 8 log.go:181] (0xc0038437c0) (5) Data frame handling I0810 00:05:12.424067 8 log.go:181] (0xc001798420) Data frame received for 1 I0810 00:05:12.424098 8 log.go:181] (0xc002da3720) (1) Data frame handling I0810 00:05:12.424123 8 log.go:181] (0xc002da3720) (1) Data frame sent I0810 00:05:12.424148 8 log.go:181] (0xc001798420) (0xc002da3720) Stream removed, broadcasting: 1 I0810 00:05:12.424170 8 log.go:181] (0xc001798420) Go away received I0810 00:05:12.424321 8 log.go:181] (0xc001798420) (0xc002da3720) Stream removed, broadcasting: 1 
I0810 00:05:12.424345 8 log.go:181] (0xc001798420) (0xc002114f00) Stream removed, broadcasting: 3 I0810 00:05:12.424371 8 log.go:181] (0xc001798420) (0xc0038437c0) Stream removed, broadcasting: 5 Aug 10 00:05:12.424: INFO: Found all expected endpoints: [netserver-0] Aug 10 00:05:12.427: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.180:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2208 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 10 00:05:12.427: INFO: >>> kubeConfig: /root/.kube/config I0810 00:05:12.450655 8 log.go:181] (0xc000e1e6e0) (0xc000db7860) Create stream I0810 00:05:12.450683 8 log.go:181] (0xc000e1e6e0) (0xc000db7860) Stream added, broadcasting: 1 I0810 00:05:12.453211 8 log.go:181] (0xc000e1e6e0) Reply frame received for 1 I0810 00:05:12.453246 8 log.go:181] (0xc000e1e6e0) (0xc001a34000) Create stream I0810 00:05:12.453257 8 log.go:181] (0xc000e1e6e0) (0xc001a34000) Stream added, broadcasting: 3 I0810 00:05:12.454152 8 log.go:181] (0xc000e1e6e0) Reply frame received for 3 I0810 00:05:12.454187 8 log.go:181] (0xc000e1e6e0) (0xc002da37c0) Create stream I0810 00:05:12.454200 8 log.go:181] (0xc000e1e6e0) (0xc002da37c0) Stream added, broadcasting: 5 I0810 00:05:12.454953 8 log.go:181] (0xc000e1e6e0) Reply frame received for 5 I0810 00:05:12.529733 8 log.go:181] (0xc000e1e6e0) Data frame received for 5 I0810 00:05:12.529768 8 log.go:181] (0xc002da37c0) (5) Data frame handling I0810 00:05:12.529793 8 log.go:181] (0xc000e1e6e0) Data frame received for 3 I0810 00:05:12.529814 8 log.go:181] (0xc001a34000) (3) Data frame handling I0810 00:05:12.529832 8 log.go:181] (0xc001a34000) (3) Data frame sent I0810 00:05:12.529844 8 log.go:181] (0xc000e1e6e0) Data frame received for 3 I0810 00:05:12.529856 8 log.go:181] (0xc001a34000) (3) Data frame handling I0810 00:05:12.531080 8 log.go:181] (0xc000e1e6e0) Data 
frame received for 1 I0810 00:05:12.531102 8 log.go:181] (0xc000db7860) (1) Data frame handling I0810 00:05:12.531112 8 log.go:181] (0xc000db7860) (1) Data frame sent I0810 00:05:12.531130 8 log.go:181] (0xc000e1e6e0) (0xc000db7860) Stream removed, broadcasting: 1 I0810 00:05:12.531162 8 log.go:181] (0xc000e1e6e0) Go away received I0810 00:05:12.531294 8 log.go:181] (0xc000e1e6e0) (0xc000db7860) Stream removed, broadcasting: 1 I0810 00:05:12.531309 8 log.go:181] (0xc000e1e6e0) (0xc001a34000) Stream removed, broadcasting: 3 I0810 00:05:12.531317 8 log.go:181] (0xc000e1e6e0) (0xc002da37c0) Stream removed, broadcasting: 5 Aug 10 00:05:12.531: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:05:12.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2208" for this suite. • [SLOW TEST:30.492 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":110,"skipped":1688,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] 
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:05:12.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 10 00:05:12.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Aug 10 00:05:13.951: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-10T00:05:13Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-10T00:05:13Z]] name:name1 resourceVersion:5780496 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:a3cd5685-7530-4dc7-b65c-7f5ff22ff3e4] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Aug 10 00:05:23.958: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-10T00:05:23Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-10T00:05:23Z]] name:name2 resourceVersion:5780554 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:765cbabd-8e21-4506-84bd-2502976d9e87] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Aug 10 00:05:33.966: INFO: Got : MODIFIED 
&{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-10T00:05:13Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-10T00:05:33Z]] name:name1 resourceVersion:5780584 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:a3cd5685-7530-4dc7-b65c-7f5ff22ff3e4] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Aug 10 00:05:43.974: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-10T00:05:23Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-10T00:05:43Z]] name:name2 resourceVersion:5780614 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:765cbabd-8e21-4506-84bd-2502976d9e87] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Aug 10 00:05:53.983: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-10T00:05:13Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-10T00:05:33Z]] name:name1 resourceVersion:5780644 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:a3cd5685-7530-4dc7-b65c-7f5ff22ff3e4] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Aug 10 00:06:03.991: INFO: Got : DELETED 
&{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-10T00:05:23Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-10T00:05:43Z]] name:name2 resourceVersion:5780674 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:765cbabd-8e21-4506-84bd-2502976d9e87] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:06:14.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-3018" for this suite. • [SLOW TEST:61.971 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":303,"completed":111,"skipped":1696,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] 
Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:06:14.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Aug 10 00:06:22.665: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 10 00:06:22.673: INFO: Pod pod-with-poststart-exec-hook still exists Aug 10 00:06:24.674: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 10 00:06:24.678: INFO: Pod pod-with-poststart-exec-hook still exists Aug 10 00:06:26.674: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 10 00:06:26.679: INFO: Pod pod-with-poststart-exec-hook still exists Aug 10 00:06:28.674: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 10 00:06:28.678: INFO: Pod pod-with-poststart-exec-hook still exists Aug 10 00:06:30.674: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 10 00:06:30.678: INFO: Pod pod-with-poststart-exec-hook still exists Aug 10 00:06:32.674: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 10 00:06:32.683: INFO: Pod pod-with-poststart-exec-hook still exists Aug 10 00:06:34.674: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 10 00:06:34.678: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] 
Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:06:34.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5350" for this suite. • [SLOW TEST:20.176 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":303,"completed":112,"skipped":1707,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:06:34.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for 
the deployment to be ready Aug 10 00:06:35.322: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 10 00:06:37.333: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732614795, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732614795, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732614795, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732614795, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 10 00:06:40.370: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:06:40.538: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1694" for this suite. STEP: Destroying namespace "webhook-1694-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.946 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":303,"completed":113,"skipped":1716,"failed":0} [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:06:40.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl can dry-run update Pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Aug 10 00:06:40.933: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config run 
e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-7123' Aug 10 00:06:41.091: INFO: stderr: "" Aug 10 00:06:41.091: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run Aug 10 00:06:41.091: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod -o json --namespace=kubectl-7123' Aug 10 00:06:41.230: INFO: stderr: "" Aug 10 00:06:41.230: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-08-10T00:06:41Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-08-10T00:06:41Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-7123\",\n \"resourceVersion\": \"5780889\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-7123/pods/e2e-test-httpd-pod\",\n \"uid\": \"57efc0f1-aed1-40ea-8ecf-367ef6eddede\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": 
\"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-rnp6t\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-rnp6t\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-rnp6t\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-10T00:06:41Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"phase\": \"Pending\",\n \"qosClass\": \"BestEffort\"\n }\n}\n" Aug 10 00:06:41.230: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config replace -f - --dry-run server --namespace=kubectl-7123' Aug 10 00:06:41.552: INFO: stderr: "W0810 00:06:41.295165 736 helpers.go:553] --dry-run is deprecated and can be replaced with --dry-run=client.\n" Aug 10 00:06:41.552: INFO: stdout: "pod/e2e-test-httpd-pod replaced (dry run)\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/httpd:2.4.38-alpine Aug 10 00:06:41.555: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7123' Aug 10 00:06:53.870: 
INFO: stderr: "" Aug 10 00:06:53.870: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:06:53.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7123" for this suite. • [SLOW TEST:13.263 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl server-side dry-run /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:919 should check if kubectl can dry-run update Pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":303,"completed":114,"skipped":1716,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:06:53.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-9215 STEP: 
creating a selector STEP: Creating the service pods in kubernetes Aug 10 00:06:53.942: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Aug 10 00:06:54.035: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 10 00:06:56.039: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 10 00:06:58.040: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 00:07:00.040: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 00:07:02.039: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 00:07:04.040: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 00:07:06.040: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 00:07:08.039: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 00:07:10.039: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 00:07:12.039: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 00:07:14.039: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 00:07:16.043: INFO: The status of Pod netserver-0 is Running (Ready = true) Aug 10 00:07:16.047: INFO: The status of Pod netserver-1 is Running (Ready = false) Aug 10 00:07:18.052: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Aug 10 00:07:24.144: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.70 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9215 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 10 00:07:24.144: INFO: >>> kubeConfig: /root/.kube/config I0810 00:07:24.175327 8 log.go:181] (0xc000e1eb00) (0xc002838b40) Create stream I0810 00:07:24.175360 8 log.go:181] (0xc000e1eb00) (0xc002838b40) Stream added, broadcasting: 1 I0810 00:07:24.177420 8 log.go:181] (0xc000e1eb00) Reply 
frame received for 1 I0810 00:07:24.177453 8 log.go:181] (0xc000e1eb00) (0xc00076eaa0) Create stream I0810 00:07:24.177461 8 log.go:181] (0xc000e1eb00) (0xc00076eaa0) Stream added, broadcasting: 3 I0810 00:07:24.178625 8 log.go:181] (0xc000e1eb00) Reply frame received for 3 I0810 00:07:24.178658 8 log.go:181] (0xc000e1eb00) (0xc00264c500) Create stream I0810 00:07:24.178669 8 log.go:181] (0xc000e1eb00) (0xc00264c500) Stream added, broadcasting: 5 I0810 00:07:24.179654 8 log.go:181] (0xc000e1eb00) Reply frame received for 5 I0810 00:07:25.258034 8 log.go:181] (0xc000e1eb00) Data frame received for 3 I0810 00:07:25.258082 8 log.go:181] (0xc00076eaa0) (3) Data frame handling I0810 00:07:25.258099 8 log.go:181] (0xc00076eaa0) (3) Data frame sent I0810 00:07:25.258115 8 log.go:181] (0xc000e1eb00) Data frame received for 3 I0810 00:07:25.258132 8 log.go:181] (0xc00076eaa0) (3) Data frame handling I0810 00:07:25.258153 8 log.go:181] (0xc000e1eb00) Data frame received for 5 I0810 00:07:25.258181 8 log.go:181] (0xc00264c500) (5) Data frame handling I0810 00:07:25.260178 8 log.go:181] (0xc000e1eb00) Data frame received for 1 I0810 00:07:25.260217 8 log.go:181] (0xc002838b40) (1) Data frame handling I0810 00:07:25.260260 8 log.go:181] (0xc002838b40) (1) Data frame sent I0810 00:07:25.260438 8 log.go:181] (0xc000e1eb00) (0xc002838b40) Stream removed, broadcasting: 1 I0810 00:07:25.260510 8 log.go:181] (0xc000e1eb00) Go away received I0810 00:07:25.260588 8 log.go:181] (0xc000e1eb00) (0xc002838b40) Stream removed, broadcasting: 1 I0810 00:07:25.260620 8 log.go:181] (0xc000e1eb00) (0xc00076eaa0) Stream removed, broadcasting: 3 I0810 00:07:25.260660 8 log.go:181] (0xc000e1eb00) (0xc00264c500) Stream removed, broadcasting: 5 Aug 10 00:07:25.260: INFO: Found all expected endpoints: [netserver-0] Aug 10 00:07:25.264: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.186 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9215 
PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 10 00:07:25.264: INFO: >>> kubeConfig: /root/.kube/config I0810 00:07:25.296125 8 log.go:181] (0xc000e1f130) (0xc002838fa0) Create stream I0810 00:07:25.296157 8 log.go:181] (0xc000e1f130) (0xc002838fa0) Stream added, broadcasting: 1 I0810 00:07:25.297677 8 log.go:181] (0xc000e1f130) Reply frame received for 1 I0810 00:07:25.297704 8 log.go:181] (0xc000e1f130) (0xc000917680) Create stream I0810 00:07:25.297714 8 log.go:181] (0xc000e1f130) (0xc000917680) Stream added, broadcasting: 3 I0810 00:07:25.298410 8 log.go:181] (0xc000e1f130) Reply frame received for 3 I0810 00:07:25.298447 8 log.go:181] (0xc000e1f130) (0xc000917900) Create stream I0810 00:07:25.298463 8 log.go:181] (0xc000e1f130) (0xc000917900) Stream added, broadcasting: 5 I0810 00:07:25.299190 8 log.go:181] (0xc000e1f130) Reply frame received for 5 I0810 00:07:26.383445 8 log.go:181] (0xc000e1f130) Data frame received for 3 I0810 00:07:26.383494 8 log.go:181] (0xc000917680) (3) Data frame handling I0810 00:07:26.383538 8 log.go:181] (0xc000917680) (3) Data frame sent I0810 00:07:26.383825 8 log.go:181] (0xc000e1f130) Data frame received for 3 I0810 00:07:26.383907 8 log.go:181] (0xc000917680) (3) Data frame handling I0810 00:07:26.384009 8 log.go:181] (0xc000e1f130) Data frame received for 5 I0810 00:07:26.384057 8 log.go:181] (0xc000917900) (5) Data frame handling I0810 00:07:26.385887 8 log.go:181] (0xc000e1f130) Data frame received for 1 I0810 00:07:26.385966 8 log.go:181] (0xc002838fa0) (1) Data frame handling I0810 00:07:26.386037 8 log.go:181] (0xc002838fa0) (1) Data frame sent I0810 00:07:26.386075 8 log.go:181] (0xc000e1f130) (0xc002838fa0) Stream removed, broadcasting: 1 I0810 00:07:26.386187 8 log.go:181] (0xc000e1f130) Go away received I0810 00:07:26.386318 8 log.go:181] (0xc000e1f130) (0xc002838fa0) Stream removed, broadcasting: 1 I0810 00:07:26.386355 8 log.go:181] 
(0xc000e1f130) (0xc000917680) Stream removed, broadcasting: 3 I0810 00:07:26.386376 8 log.go:181] (0xc000e1f130) (0xc000917900) Stream removed, broadcasting: 5 Aug 10 00:07:26.386: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:07:26.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9215" for this suite. • [SLOW TEST:32.496 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":115,"skipped":1780,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:07:26.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:08:26.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3869" for this suite. • [SLOW TEST:60.096 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":303,"completed":116,"skipped":1803,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:08:26.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Aug 10 00:08:26.585: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Aug 10 00:08:37.649: INFO: >>> kubeConfig: /root/.kube/config Aug 10 00:08:40.653: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:08:52.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3074" for this suite. • [SLOW TEST:25.729 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":303,"completed":117,"skipped":1815,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:08:52.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: 
Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 10 00:08:52.299: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c73f914b-7c8b-4e4d-94e1-9defc8c0e3d0" in namespace "downward-api-1058" to be "Succeeded or Failed" Aug 10 00:08:52.316: INFO: Pod "downwardapi-volume-c73f914b-7c8b-4e4d-94e1-9defc8c0e3d0": Phase="Pending", Reason="", readiness=false. Elapsed: 17.369042ms Aug 10 00:08:54.319: INFO: Pod "downwardapi-volume-c73f914b-7c8b-4e4d-94e1-9defc8c0e3d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020591017s Aug 10 00:08:56.323: INFO: Pod "downwardapi-volume-c73f914b-7c8b-4e4d-94e1-9defc8c0e3d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024216329s STEP: Saw pod success Aug 10 00:08:56.323: INFO: Pod "downwardapi-volume-c73f914b-7c8b-4e4d-94e1-9defc8c0e3d0" satisfied condition "Succeeded or Failed" Aug 10 00:08:56.325: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-c73f914b-7c8b-4e4d-94e1-9defc8c0e3d0 container client-container: STEP: delete the pod Aug 10 00:08:56.398: INFO: Waiting for pod downwardapi-volume-c73f914b-7c8b-4e4d-94e1-9defc8c0e3d0 to disappear Aug 10 00:08:56.407: INFO: Pod downwardapi-volume-c73f914b-7c8b-4e4d-94e1-9defc8c0e3d0 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:08:56.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1058" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":303,"completed":118,"skipped":1821,"failed":0} SSSS ------------------------------ [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:08:56.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should run through a ConfigMap lifecycle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a ConfigMap STEP: fetching the ConfigMap STEP: patching the ConfigMap STEP: listing all ConfigMaps in all namespaces with a label selector STEP: deleting the ConfigMap by collection with a label selector STEP: listing all ConfigMaps in test namespace [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:08:56.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1062" for this suite. 
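The "patching the ConfigMap" step above sends a merge patch against the object's `data` field. A toy sketch of merge-patch semantics over a ConfigMap's string data (the key names are made up for illustration; the real test's patch body is not shown in the log):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// mergePatchData applies an RFC 7386-style merge patch to a ConfigMap's
// string data: keys in the patch overwrite or add, null-valued keys delete.
func mergePatchData(data map[string]string, patchJSON []byte) (map[string]string, error) {
	var patch map[string]*string
	if err := json.Unmarshal(patchJSON, &patch); err != nil {
		return nil, err
	}
	out := map[string]string{}
	for k, v := range data {
		out[k] = v
	}
	for k, v := range patch {
		if v == nil {
			delete(out, k)
		} else {
			out[k] = *v
		}
	}
	return out, nil
}

func main() {
	cm := map[string]string{"valueName": "value"} // hypothetical initial data
	patched, err := mergePatchData(cm, []byte(`{"valueName":"value1"}`))
	fmt.Println(patched, err)
}
```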
•{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":303,"completed":119,"skipped":1825,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:08:56.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:308 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Aug 10 00:08:56.644: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8910' Aug 10 00:08:57.002: INFO: stderr: "" Aug 10 00:08:57.002: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Aug 10 00:08:57.002: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8910' Aug 10 00:08:57.151: INFO: stderr: "" Aug 10 00:08:57.151: INFO: stdout: "update-demo-nautilus-prr6j update-demo-nautilus-vwzcd " Aug 10 00:08:57.151: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-prr6j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8910' Aug 10 00:08:57.272: INFO: stderr: "" Aug 10 00:08:57.272: INFO: stdout: "" Aug 10 00:08:57.272: INFO: update-demo-nautilus-prr6j is created but not running Aug 10 00:09:02.272: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8910' Aug 10 00:09:02.383: INFO: stderr: "" Aug 10 00:09:02.383: INFO: stdout: "update-demo-nautilus-prr6j update-demo-nautilus-vwzcd " Aug 10 00:09:02.383: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-prr6j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8910' Aug 10 00:09:02.482: INFO: stderr: "" Aug 10 00:09:02.482: INFO: stdout: "true" Aug 10 00:09:02.482: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-prr6j -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8910' Aug 10 00:09:02.582: INFO: stderr: "" Aug 10 00:09:02.582: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 10 00:09:02.582: INFO: validating pod update-demo-nautilus-prr6j Aug 10 00:09:02.586: INFO: got data: { "image": "nautilus.jpg" } Aug 10 00:09:02.586: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 10 00:09:02.586: INFO: update-demo-nautilus-prr6j is verified up and running Aug 10 00:09:02.586: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vwzcd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8910' Aug 10 00:09:02.682: INFO: stderr: "" Aug 10 00:09:02.682: INFO: stdout: "true" Aug 10 00:09:02.682: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vwzcd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8910' Aug 10 00:09:02.783: INFO: stderr: "" Aug 10 00:09:02.783: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 10 00:09:02.783: INFO: validating pod update-demo-nautilus-vwzcd Aug 10 00:09:02.787: INFO: got data: { "image": "nautilus.jpg" } Aug 10 00:09:02.787: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Aug 10 00:09:02.787: INFO: update-demo-nautilus-vwzcd is verified up and running STEP: using delete to clean up resources Aug 10 00:09:02.787: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8910' Aug 10 00:09:02.898: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 10 00:09:02.898: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Aug 10 00:09:02.898: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8910' Aug 10 00:09:03.010: INFO: stderr: "No resources found in kubectl-8910 namespace.\n" Aug 10 00:09:03.010: INFO: stdout: "" Aug 10 00:09:03.010: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8910 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 10 00:09:03.124: INFO: stderr: "" Aug 10 00:09:03.124: INFO: stdout: "update-demo-nautilus-prr6j\nupdate-demo-nautilus-vwzcd\n" Aug 10 00:09:03.624: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8910' Aug 10 00:09:03.735: INFO: stderr: "No resources found in kubectl-8910 namespace.\n" Aug 10 00:09:03.735: INFO: stdout: "" Aug 10 00:09:03.735: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8910 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' 
Aug 10 00:09:03.863: INFO: stderr: "" Aug 10 00:09:03.863: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:09:03.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8910" for this suite. • [SLOW TEST:7.335 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:306 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":303,"completed":120,"skipped":1832,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:09:03.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating replication controller 
my-hostname-basic-d5327273-ac09-4ab3-a406-2b26fdf8278f Aug 10 00:09:04.162: INFO: Pod name my-hostname-basic-d5327273-ac09-4ab3-a406-2b26fdf8278f: Found 0 pods out of 1 Aug 10 00:09:09.192: INFO: Pod name my-hostname-basic-d5327273-ac09-4ab3-a406-2b26fdf8278f: Found 1 pods out of 1 Aug 10 00:09:09.192: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-d5327273-ac09-4ab3-a406-2b26fdf8278f" are running Aug 10 00:09:09.195: INFO: Pod "my-hostname-basic-d5327273-ac09-4ab3-a406-2b26fdf8278f-d6p4f" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-10 00:09:04 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-10 00:09:07 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-10 00:09:07 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-10 00:09:04 +0000 UTC Reason: Message:}]) Aug 10 00:09:09.195: INFO: Trying to dial the pod Aug 10 00:09:14.205: INFO: Controller my-hostname-basic-d5327273-ac09-4ab3-a406-2b26fdf8278f: Got expected result from replica 1 [my-hostname-basic-d5327273-ac09-4ab3-a406-2b26fdf8278f-d6p4f]: "my-hostname-basic-d5327273-ac09-4ab3-a406-2b26fdf8278f-d6p4f", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:09:14.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5070" for this suite. 
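The `--template={{range.items}}{{.metadata.name}} {{end}}` flags in the kubectl invocations earlier in this log are ordinary Go `text/template` programs run over the decoded API response (kubectl additionally registers helpers such as `exists`, which plain `text/template` lacks). A self-contained sketch over a hand-written pod list; the JSON here is a stand-in for the real `get pods` output:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"text/template"
)

const podList = `{"items":[
  {"metadata":{"name":"update-demo-nautilus-prr6j"}},
  {"metadata":{"name":"update-demo-nautilus-vwzcd"}}]}`

// renderNames executes the same template kubectl was given:
// {{range .items}}{{.metadata.name}} {{end}}
// Field access like .metadata.name works on map[string]interface{} values.
func renderNames(listJSON string) (string, error) {
	var data map[string]interface{}
	if err := json.Unmarshal([]byte(listJSON), &data); err != nil {
		return "", err
	}
	tmpl, err := template.New("names").Parse(`{{range .items}}{{.metadata.name}} {{end}}`)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, data); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	out, err := renderNames(podList)
	fmt.Printf("%q %v\n", out, err)
}
```

Note the trailing space in the rendered output, which is why the log's stdout reads `"update-demo-nautilus-prr6j update-demo-nautilus-vwzcd "`.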
• [SLOW TEST:10.341 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":303,"completed":121,"skipped":1856,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:09:14.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Aug 10 00:09:18.888: INFO: Successfully updated pod "annotationupdatea8c1aba9-1491-4306-a279-9dc1e7c26955" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:09:20.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5893" for this suite. 
• [SLOW TEST:6.715 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":303,"completed":122,"skipped":1878,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:09:20.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:09:32.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4524" for this suite. 
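The ResourceQuota steps above ("status captures replicaset creation", "status released usage") boil down to the quota controller recalculating a used-versus-hard ledger as objects come and go. A toy model of that accounting, with an illustrative resource name; this is a sketch of the idea, not the controller's real bookkeeping:

```go
package main

import "fmt"

// quota is a toy model of ResourceQuota accounting: "hard" is the limit,
// "used" is adjusted as objects are created and deleted.
type quota struct {
	hard map[string]int
	used map[string]int
}

// create admits the object only if usage would stay within the hard limit.
func (q *quota) create(resource string, n int) error {
	if q.used[resource]+n > q.hard[resource] {
		return fmt.Errorf("exceeded quota for %s", resource)
	}
	q.used[resource] += n
	return nil
}

// release returns usage when an object is deleted, never going negative.
func (q *quota) release(resource string, n int) {
	q.used[resource] -= n
	if q.used[resource] < 0 {
		q.used[resource] = 0
	}
}

func main() {
	q := &quota{hard: map[string]int{"count/replicasets.apps": 1}, used: map[string]int{}}
	fmt.Println(q.create("count/replicasets.apps", 1)) // fits the quota
	fmt.Println(q.create("count/replicasets.apps", 1)) // over the limit
	q.release("count/replicasets.apps", 1)
	fmt.Println(q.used["count/replicasets.apps"])
}
```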
• [SLOW TEST:11.158 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":303,"completed":123,"skipped":1879,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:09:32.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Aug 10 00:09:32.193: INFO: Waiting up to 5m0s for pod "pod-5a18acc7-c9ac-4ce4-b1fc-e15f7fce6f81" in namespace "emptydir-1980" to be "Succeeded or Failed" Aug 10 00:09:32.210: INFO: Pod "pod-5a18acc7-c9ac-4ce4-b1fc-e15f7fce6f81": Phase="Pending", Reason="", readiness=false. Elapsed: 16.413459ms Aug 10 00:09:34.214: INFO: Pod "pod-5a18acc7-c9ac-4ce4-b1fc-e15f7fce6f81": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.021058263s Aug 10 00:09:36.219: INFO: Pod "pod-5a18acc7-c9ac-4ce4-b1fc-e15f7fce6f81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025524884s STEP: Saw pod success Aug 10 00:09:36.219: INFO: Pod "pod-5a18acc7-c9ac-4ce4-b1fc-e15f7fce6f81" satisfied condition "Succeeded or Failed" Aug 10 00:09:36.222: INFO: Trying to get logs from node latest-worker2 pod pod-5a18acc7-c9ac-4ce4-b1fc-e15f7fce6f81 container test-container: STEP: delete the pod Aug 10 00:09:36.303: INFO: Waiting for pod pod-5a18acc7-c9ac-4ce4-b1fc-e15f7fce6f81 to disappear Aug 10 00:09:36.307: INFO: Pod pod-5a18acc7-c9ac-4ce4-b1fc-e15f7fce6f81 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:09:36.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1980" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":124,"skipped":1917,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:09:36.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the 
HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Aug 10 00:09:44.487: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 10 00:09:44.560: INFO: Pod pod-with-poststart-http-hook still exists Aug 10 00:09:46.560: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 10 00:09:46.566: INFO: Pod pod-with-poststart-http-hook still exists Aug 10 00:09:48.560: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 10 00:09:48.566: INFO: Pod pod-with-poststart-http-hook still exists Aug 10 00:09:50.560: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 10 00:09:50.565: INFO: Pod pod-with-poststart-http-hook still exists Aug 10 00:09:52.560: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 10 00:09:52.565: INFO: Pod pod-with-poststart-http-hook still exists Aug 10 00:09:54.560: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 10 00:09:54.564: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:09:54.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6343" for this suite. 
• [SLOW TEST:18.258 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":303,"completed":125,"skipped":1937,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:09:54.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 10 00:09:58.775: INFO: Waiting up to 5m0s for pod "client-envvars-de06c5c1-f432-4ecb-af4f-e4f551ae8ccb" in namespace "pods-4747" to be "Succeeded or Failed" Aug 10 00:09:58.788: INFO: Pod "client-envvars-de06c5c1-f432-4ecb-af4f-e4f551ae8ccb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 13.779879ms Aug 10 00:10:00.793: INFO: Pod "client-envvars-de06c5c1-f432-4ecb-af4f-e4f551ae8ccb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018278485s Aug 10 00:10:02.798: INFO: Pod "client-envvars-de06c5c1-f432-4ecb-af4f-e4f551ae8ccb": Phase="Running", Reason="", readiness=true. Elapsed: 4.023303114s Aug 10 00:10:04.803: INFO: Pod "client-envvars-de06c5c1-f432-4ecb-af4f-e4f551ae8ccb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.027890398s STEP: Saw pod success Aug 10 00:10:04.803: INFO: Pod "client-envvars-de06c5c1-f432-4ecb-af4f-e4f551ae8ccb" satisfied condition "Succeeded or Failed" Aug 10 00:10:04.806: INFO: Trying to get logs from node latest-worker2 pod client-envvars-de06c5c1-f432-4ecb-af4f-e4f551ae8ccb container env3cont: STEP: delete the pod Aug 10 00:10:04.877: INFO: Waiting for pod client-envvars-de06c5c1-f432-4ecb-af4f-e4f551ae8ccb to disappear Aug 10 00:10:04.889: INFO: Pod client-envvars-de06c5c1-f432-4ecb-af4f-e4f551ae8ccb no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:10:04.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4747" for this suite. 
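The "environment variables for services" behavior being checked above is that pods receive `{SVCNAME}_SERVICE_HOST` and `{SVCNAME}_SERVICE_PORT` variables for each service that existed when the pod started, where the prefix is the service name upper-cased with dashes mapped to underscores. A sketch of that name mapping (the service name is a made-up example):

```go
package main

import (
	"fmt"
	"strings"
)

// serviceEnvName converts a Service name into the environment-variable
// prefix injected into pods: upper-case, with dashes becoming underscores.
func serviceEnvName(service string) string {
	return strings.ToUpper(strings.ReplaceAll(service, "-", "_"))
}

func main() {
	svc := "fooservice" // hypothetical service name
	fmt.Printf("%s_SERVICE_HOST\n", serviceEnvName(svc))
	fmt.Printf("%s_SERVICE_PORT\n", serviceEnvName(svc))
}
```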
• [SLOW TEST:10.324 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":303,"completed":126,"skipped":1974,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:10:04.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 10 00:12:05.065: INFO: Deleting pod "var-expansion-6de58811-5244-4542-a1c6-42db2b6d7ff9" in namespace "var-expansion-3965" Aug 10 00:12:05.070: INFO: Wait up to 5m0s for pod "var-expansion-6de58811-5244-4542-a1c6-42db2b6d7ff9" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:12:09.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3965" for this suite. 
• [SLOW TEST:124.275 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":303,"completed":127,"skipped":1994,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:12:09.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:12:15.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4185" for this suite. STEP: Destroying namespace "nsdeletetest-5299" for this suite. Aug 10 00:12:15.515: INFO: Namespace nsdeletetest-5299 was already deleted STEP: Destroying namespace "nsdeletetest-3120" for this suite. • [SLOW TEST:6.345 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":303,"completed":128,"skipped":2005,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:12:15.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ExternalName to NodePort 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7747 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-7747 I0810 00:12:15.755277 8 runners.go:190] Created replication controller with name: externalname-service, namespace: services-7747, replica count: 2 I0810 00:12:18.805699 8 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0810 00:12:21.805909 8 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 10 00:12:21.805: INFO: Creating new exec pod Aug 10 00:12:26.860: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-7747 execpodjwvgh -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Aug 10 00:12:30.019: INFO: stderr: "I0810 00:12:29.928169 1002 log.go:181] (0xc0001ec160) (0xc000717680) Create stream\nI0810 00:12:29.928224 1002 log.go:181] (0xc0001ec160) (0xc000717680) Stream added, broadcasting: 1\nI0810 00:12:29.930199 1002 log.go:181] (0xc0001ec160) Reply frame received for 1\nI0810 00:12:29.930222 1002 log.go:181] (0xc0001ec160) (0xc000aa8320) Create stream\nI0810 00:12:29.930229 1002 log.go:181] (0xc0001ec160) (0xc000aa8320) Stream added, broadcasting: 3\nI0810 00:12:29.931240 1002 log.go:181] (0xc0001ec160) Reply frame received for 3\nI0810 00:12:29.931276 1002 log.go:181] (0xc0001ec160) (0xc000aa8c80) Create stream\nI0810 00:12:29.931288 1002 log.go:181] (0xc0001ec160) (0xc000aa8c80) Stream added, broadcasting: 5\nI0810 00:12:29.932312 1002 log.go:181] (0xc0001ec160) Reply frame received for 5\nI0810 00:12:30.011656 1002 
log.go:181] (0xc0001ec160) Data frame received for 5\nI0810 00:12:30.011700 1002 log.go:181] (0xc000aa8c80) (5) Data frame handling\nI0810 00:12:30.011729 1002 log.go:181] (0xc000aa8c80) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0810 00:12:30.012346 1002 log.go:181] (0xc0001ec160) Data frame received for 5\nI0810 00:12:30.012399 1002 log.go:181] (0xc000aa8c80) (5) Data frame handling\nI0810 00:12:30.012419 1002 log.go:181] (0xc000aa8c80) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0810 00:12:30.012485 1002 log.go:181] (0xc0001ec160) Data frame received for 3\nI0810 00:12:30.012511 1002 log.go:181] (0xc000aa8320) (3) Data frame handling\nI0810 00:12:30.012634 1002 log.go:181] (0xc0001ec160) Data frame received for 5\nI0810 00:12:30.012665 1002 log.go:181] (0xc000aa8c80) (5) Data frame handling\nI0810 00:12:30.014506 1002 log.go:181] (0xc0001ec160) Data frame received for 1\nI0810 00:12:30.014524 1002 log.go:181] (0xc000717680) (1) Data frame handling\nI0810 00:12:30.014538 1002 log.go:181] (0xc000717680) (1) Data frame sent\nI0810 00:12:30.014644 1002 log.go:181] (0xc0001ec160) (0xc000717680) Stream removed, broadcasting: 1\nI0810 00:12:30.014702 1002 log.go:181] (0xc0001ec160) Go away received\nI0810 00:12:30.015079 1002 log.go:181] (0xc0001ec160) (0xc000717680) Stream removed, broadcasting: 1\nI0810 00:12:30.015099 1002 log.go:181] (0xc0001ec160) (0xc000aa8320) Stream removed, broadcasting: 3\nI0810 00:12:30.015107 1002 log.go:181] (0xc0001ec160) (0xc000aa8c80) Stream removed, broadcasting: 5\n" Aug 10 00:12:30.020: INFO: stdout: "" Aug 10 00:12:30.020: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-7747 execpodjwvgh -- /bin/sh -x -c nc -zv -t -w 2 10.100.90.158 80' Aug 10 00:12:30.231: INFO: stderr: "I0810 00:12:30.157687 1021 log.go:181] (0xc0005b0bb0) (0xc000a9e000) Create stream\nI0810 00:12:30.157750 1021 
log.go:181] (0xc0005b0bb0) (0xc000a9e000) Stream added, broadcasting: 1\nI0810 00:12:30.163023 1021 log.go:181] (0xc0005b0bb0) Reply frame received for 1\nI0810 00:12:30.163093 1021 log.go:181] (0xc0005b0bb0) (0xc000874fa0) Create stream\nI0810 00:12:30.163113 1021 log.go:181] (0xc0005b0bb0) (0xc000874fa0) Stream added, broadcasting: 3\nI0810 00:12:30.165609 1021 log.go:181] (0xc0005b0bb0) Reply frame received for 3\nI0810 00:12:30.165639 1021 log.go:181] (0xc0005b0bb0) (0xc00086a780) Create stream\nI0810 00:12:30.165646 1021 log.go:181] (0xc0005b0bb0) (0xc00086a780) Stream added, broadcasting: 5\nI0810 00:12:30.166515 1021 log.go:181] (0xc0005b0bb0) Reply frame received for 5\nI0810 00:12:30.222417 1021 log.go:181] (0xc0005b0bb0) Data frame received for 5\nI0810 00:12:30.222466 1021 log.go:181] (0xc00086a780) (5) Data frame handling\nI0810 00:12:30.222491 1021 log.go:181] (0xc00086a780) (5) Data frame sent\nI0810 00:12:30.222510 1021 log.go:181] (0xc0005b0bb0) Data frame received for 5\nI0810 00:12:30.222523 1021 log.go:181] (0xc00086a780) (5) Data frame handling\n+ nc -zv -t -w 2 10.100.90.158 80\nConnection to 10.100.90.158 80 port [tcp/http] succeeded!\nI0810 00:12:30.222588 1021 log.go:181] (0xc0005b0bb0) Data frame received for 3\nI0810 00:12:30.222615 1021 log.go:181] (0xc000874fa0) (3) Data frame handling\nI0810 00:12:30.223834 1021 log.go:181] (0xc0005b0bb0) Data frame received for 1\nI0810 00:12:30.223862 1021 log.go:181] (0xc000a9e000) (1) Data frame handling\nI0810 00:12:30.223881 1021 log.go:181] (0xc000a9e000) (1) Data frame sent\nI0810 00:12:30.223898 1021 log.go:181] (0xc0005b0bb0) (0xc000a9e000) Stream removed, broadcasting: 1\nI0810 00:12:30.223963 1021 log.go:181] (0xc0005b0bb0) Go away received\nI0810 00:12:30.224631 1021 log.go:181] (0xc0005b0bb0) (0xc000a9e000) Stream removed, broadcasting: 1\nI0810 00:12:30.224657 1021 log.go:181] (0xc0005b0bb0) (0xc000874fa0) Stream removed, broadcasting: 3\nI0810 00:12:30.224674 1021 log.go:181] 
(0xc0005b0bb0) (0xc00086a780) Stream removed, broadcasting: 5\n" Aug 10 00:12:30.231: INFO: stdout: "" Aug 10 00:12:30.231: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-7747 execpodjwvgh -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 32594' Aug 10 00:12:30.430: INFO: stderr: "I0810 00:12:30.359753 1039 log.go:181] (0xc000b8af20) (0xc0009d8460) Create stream\nI0810 00:12:30.359811 1039 log.go:181] (0xc000b8af20) (0xc0009d8460) Stream added, broadcasting: 1\nI0810 00:12:30.364202 1039 log.go:181] (0xc000b8af20) Reply frame received for 1\nI0810 00:12:30.364245 1039 log.go:181] (0xc000b8af20) (0xc0008921e0) Create stream\nI0810 00:12:30.364258 1039 log.go:181] (0xc000b8af20) (0xc0008921e0) Stream added, broadcasting: 3\nI0810 00:12:30.365291 1039 log.go:181] (0xc000b8af20) Reply frame received for 3\nI0810 00:12:30.365328 1039 log.go:181] (0xc000b8af20) (0xc000586b40) Create stream\nI0810 00:12:30.365340 1039 log.go:181] (0xc000b8af20) (0xc000586b40) Stream added, broadcasting: 5\nI0810 00:12:30.366091 1039 log.go:181] (0xc000b8af20) Reply frame received for 5\nI0810 00:12:30.423101 1039 log.go:181] (0xc000b8af20) Data frame received for 3\nI0810 00:12:30.423134 1039 log.go:181] (0xc0008921e0) (3) Data frame handling\nI0810 00:12:30.423155 1039 log.go:181] (0xc000b8af20) Data frame received for 5\nI0810 00:12:30.423165 1039 log.go:181] (0xc000586b40) (5) Data frame handling\nI0810 00:12:30.423176 1039 log.go:181] (0xc000586b40) (5) Data frame sent\nI0810 00:12:30.423187 1039 log.go:181] (0xc000b8af20) Data frame received for 5\nI0810 00:12:30.423201 1039 log.go:181] (0xc000586b40) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.14 32594\nConnection to 172.18.0.14 32594 port [tcp/32594] succeeded!\nI0810 00:12:30.425061 1039 log.go:181] (0xc000b8af20) Data frame received for 1\nI0810 00:12:30.425094 1039 log.go:181] (0xc0009d8460) (1) Data frame handling\nI0810 00:12:30.425130 1039 
log.go:181] (0xc0009d8460) (1) Data frame sent\nI0810 00:12:30.425158 1039 log.go:181] (0xc000b8af20) (0xc0009d8460) Stream removed, broadcasting: 1\nI0810 00:12:30.425176 1039 log.go:181] (0xc000b8af20) Go away received\nI0810 00:12:30.425759 1039 log.go:181] (0xc000b8af20) (0xc0009d8460) Stream removed, broadcasting: 1\nI0810 00:12:30.425786 1039 log.go:181] (0xc000b8af20) (0xc0008921e0) Stream removed, broadcasting: 3\nI0810 00:12:30.425799 1039 log.go:181] (0xc000b8af20) (0xc000586b40) Stream removed, broadcasting: 5\n" Aug 10 00:12:30.431: INFO: stdout: "" Aug 10 00:12:30.431: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-7747 execpodjwvgh -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 32594' Aug 10 00:12:30.653: INFO: stderr: "I0810 00:12:30.561599 1058 log.go:181] (0xc0008334a0) (0xc000a43b80) Create stream\nI0810 00:12:30.561676 1058 log.go:181] (0xc0008334a0) (0xc000a43b80) Stream added, broadcasting: 1\nI0810 00:12:30.566789 1058 log.go:181] (0xc0008334a0) Reply frame received for 1\nI0810 00:12:30.566823 1058 log.go:181] (0xc0008334a0) (0xc000a2e000) Create stream\nI0810 00:12:30.566835 1058 log.go:181] (0xc0008334a0) (0xc000a2e000) Stream added, broadcasting: 3\nI0810 00:12:30.567811 1058 log.go:181] (0xc0008334a0) Reply frame received for 3\nI0810 00:12:30.567834 1058 log.go:181] (0xc0008334a0) (0xc000a2a640) Create stream\nI0810 00:12:30.567841 1058 log.go:181] (0xc0008334a0) (0xc000a2a640) Stream added, broadcasting: 5\nI0810 00:12:30.568666 1058 log.go:181] (0xc0008334a0) Reply frame received for 5\nI0810 00:12:30.645778 1058 log.go:181] (0xc0008334a0) Data frame received for 5\nI0810 00:12:30.645846 1058 log.go:181] (0xc000a2a640) (5) Data frame handling\nI0810 00:12:30.645872 1058 log.go:181] (0xc000a2a640) (5) Data frame sent\nI0810 00:12:30.645888 1058 log.go:181] (0xc0008334a0) Data frame received for 5\nI0810 00:12:30.645903 1058 log.go:181] (0xc000a2a640) (5) 
Data frame handling\n+ nc -zv -t -w 2 172.18.0.12 32594\nConnection to 172.18.0.12 32594 port [tcp/32594] succeeded!\nI0810 00:12:30.645980 1058 log.go:181] (0xc0008334a0) Data frame received for 3\nI0810 00:12:30.646023 1058 log.go:181] (0xc000a2e000) (3) Data frame handling\nI0810 00:12:30.647193 1058 log.go:181] (0xc0008334a0) Data frame received for 1\nI0810 00:12:30.647219 1058 log.go:181] (0xc000a43b80) (1) Data frame handling\nI0810 00:12:30.647249 1058 log.go:181] (0xc000a43b80) (1) Data frame sent\nI0810 00:12:30.647330 1058 log.go:181] (0xc0008334a0) (0xc000a43b80) Stream removed, broadcasting: 1\nI0810 00:12:30.647357 1058 log.go:181] (0xc0008334a0) Go away received\nI0810 00:12:30.647735 1058 log.go:181] (0xc0008334a0) (0xc000a43b80) Stream removed, broadcasting: 1\nI0810 00:12:30.647763 1058 log.go:181] (0xc0008334a0) (0xc000a2e000) Stream removed, broadcasting: 3\nI0810 00:12:30.647783 1058 log.go:181] (0xc0008334a0) (0xc000a2a640) Stream removed, broadcasting: 5\n" Aug 10 00:12:30.653: INFO: stdout: "" Aug 10 00:12:30.653: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:12:30.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7747" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:15.264 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":303,"completed":129,"skipped":2044,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:12:30.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-ce63ed9e-7011-4b15-9fdc-553c4631e5be in namespace container-probe-6956 Aug 10 00:12:34.890: INFO: Started pod busybox-ce63ed9e-7011-4b15-9fdc-553c4631e5be in namespace container-probe-6956 STEP: checking the pod's current state and verifying that restartCount is present Aug 10 
00:12:34.893: INFO: Initial restart count of pod busybox-ce63ed9e-7011-4b15-9fdc-553c4631e5be is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:16:35.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6956" for this suite. • [SLOW TEST:244.847 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":303,"completed":130,"skipped":2101,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:16:35.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod 
liveness-fc64227d-58d5-4a95-bf88-95624c74fe6f in namespace container-probe-4156 Aug 10 00:16:39.754: INFO: Started pod liveness-fc64227d-58d5-4a95-bf88-95624c74fe6f in namespace container-probe-4156 STEP: checking the pod's current state and verifying that restartCount is present Aug 10 00:16:39.756: INFO: Initial restart count of pod liveness-fc64227d-58d5-4a95-bf88-95624c74fe6f is 0 Aug 10 00:16:51.799: INFO: Restart count of pod container-probe-4156/liveness-fc64227d-58d5-4a95-bf88-95624c74fe6f is now 1 (12.042508249s elapsed) Aug 10 00:17:11.847: INFO: Restart count of pod container-probe-4156/liveness-fc64227d-58d5-4a95-bf88-95624c74fe6f is now 2 (32.090538221s elapsed) Aug 10 00:17:33.949: INFO: Restart count of pod container-probe-4156/liveness-fc64227d-58d5-4a95-bf88-95624c74fe6f is now 3 (54.192902272s elapsed) Aug 10 00:17:52.385: INFO: Restart count of pod container-probe-4156/liveness-fc64227d-58d5-4a95-bf88-95624c74fe6f is now 4 (1m12.62873594s elapsed) Aug 10 00:18:54.520: INFO: Restart count of pod container-probe-4156/liveness-fc64227d-58d5-4a95-bf88-95624c74fe6f is now 5 (2m14.76393996s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:18:54.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4156" for this suite. 
• [SLOW TEST:138.972 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":303,"completed":131,"skipped":2139,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:18:54.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:18:55.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1113" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":303,"completed":132,"skipped":2151,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:18:55.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's args Aug 10 00:18:55.424: INFO: Waiting up to 5m0s for pod "var-expansion-bbc099a8-25bd-4738-971b-5d652ada309a" in namespace "var-expansion-438" to be "Succeeded or Failed" Aug 10 00:18:55.437: INFO: Pod "var-expansion-bbc099a8-25bd-4738-971b-5d652ada309a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.576007ms Aug 10 00:18:57.440: INFO: Pod "var-expansion-bbc099a8-25bd-4738-971b-5d652ada309a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015610712s Aug 10 00:18:59.444: INFO: Pod "var-expansion-bbc099a8-25bd-4738-971b-5d652ada309a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.0203027s STEP: Saw pod success Aug 10 00:18:59.444: INFO: Pod "var-expansion-bbc099a8-25bd-4738-971b-5d652ada309a" satisfied condition "Succeeded or Failed" Aug 10 00:18:59.447: INFO: Trying to get logs from node latest-worker2 pod var-expansion-bbc099a8-25bd-4738-971b-5d652ada309a container dapi-container: STEP: delete the pod Aug 10 00:18:59.498: INFO: Waiting for pod var-expansion-bbc099a8-25bd-4738-971b-5d652ada309a to disappear Aug 10 00:18:59.522: INFO: Pod var-expansion-bbc099a8-25bd-4738-971b-5d652ada309a no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:18:59.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-438" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":303,"completed":133,"skipped":2217,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:18:59.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-szhbc in namespace proxy-6786 I0810 00:18:59.637583 8 runners.go:190] Created replication controller with name: proxy-service-szhbc, namespace: 
proxy-6786, replica count: 1 I0810 00:19:00.688020 8 runners.go:190] proxy-service-szhbc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0810 00:19:01.688283 8 runners.go:190] proxy-service-szhbc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0810 00:19:02.688505 8 runners.go:190] proxy-service-szhbc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0810 00:19:03.688856 8 runners.go:190] proxy-service-szhbc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0810 00:19:04.689071 8 runners.go:190] proxy-service-szhbc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0810 00:19:05.689267 8 runners.go:190] proxy-service-szhbc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0810 00:19:06.689487 8 runners.go:190] proxy-service-szhbc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0810 00:19:07.689756 8 runners.go:190] proxy-service-szhbc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0810 00:19:08.689977 8 runners.go:190] proxy-service-szhbc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0810 00:19:09.690246 8 runners.go:190] proxy-service-szhbc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0810 00:19:10.690530 8 runners.go:190] proxy-service-szhbc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0810 
00:19:11.690800 8 runners.go:190] proxy-service-szhbc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 10 00:19:11.694: INFO: setup took 12.090968854s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Aug 10 00:19:11.704: INFO: (0) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:162/proxy/: bar (200; 8.713614ms) Aug 10 00:19:11.704: INFO: (0) /api/v1/namespaces/proxy-6786/services/proxy-service-szhbc:portname2/proxy/: bar (200; 10.592365ms) Aug 10 00:19:11.705: INFO: (0) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:160/proxy/: foo (200; 8.896703ms) Aug 10 00:19:11.705: INFO: (0) /api/v1/namespaces/proxy-6786/services/http:proxy-service-szhbc:portname1/proxy/: foo (200; 10.164033ms) Aug 10 00:19:11.705: INFO: (0) /api/v1/namespaces/proxy-6786/services/proxy-service-szhbc:portname1/proxy/: foo (200; 10.556972ms) Aug 10 00:19:11.705: INFO: (0) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:160/proxy/: foo (200; 9.842049ms) Aug 10 00:19:11.706: INFO: (0) /api/v1/namespaces/proxy-6786/services/http:proxy-service-szhbc:portname2/proxy/: bar (200; 10.613963ms) Aug 10 00:19:11.707: INFO: (0) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:1080/proxy/: test<... (200; 11.620024ms) Aug 10 00:19:11.707: INFO: (0) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x/proxy/: test (200; 11.853287ms) Aug 10 00:19:11.707: INFO: (0) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:1080/proxy/: ... 
(200; 13.243228ms) Aug 10 00:19:11.707: INFO: (0) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:162/proxy/: bar (200; 13.146825ms) Aug 10 00:19:11.710: INFO: (0) /api/v1/namespaces/proxy-6786/services/https:proxy-service-szhbc:tlsportname1/proxy/: tls baz (200; 15.29391ms) Aug 10 00:19:11.710: INFO: (0) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:462/proxy/: tls qux (200; 14.699862ms) Aug 10 00:19:11.711: INFO: (0) /api/v1/namespaces/proxy-6786/services/https:proxy-service-szhbc:tlsportname2/proxy/: tls qux (200; 15.355137ms) Aug 10 00:19:11.711: INFO: (0) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:460/proxy/: tls baz (200; 15.302016ms) Aug 10 00:19:11.713: INFO: (0) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:443/proxy/: test (200; 5.821425ms) Aug 10 00:19:11.719: INFO: (1) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:1080/proxy/: test<... (200; 5.813524ms) Aug 10 00:19:11.719: INFO: (1) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:462/proxy/: tls qux (200; 5.94738ms) Aug 10 00:19:11.720: INFO: (1) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:160/proxy/: foo (200; 5.871762ms) Aug 10 00:19:11.720: INFO: (1) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:1080/proxy/: ... (200; 5.8937ms) Aug 10 00:19:11.720: INFO: (1) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:443/proxy/: test<... 
(200; 4.857242ms) Aug 10 00:19:11.725: INFO: (2) /api/v1/namespaces/proxy-6786/services/proxy-service-szhbc:portname2/proxy/: bar (200; 4.972733ms) Aug 10 00:19:11.726: INFO: (2) /api/v1/namespaces/proxy-6786/services/https:proxy-service-szhbc:tlsportname2/proxy/: tls qux (200; 5.448163ms) Aug 10 00:19:11.726: INFO: (2) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:160/proxy/: foo (200; 5.158983ms) Aug 10 00:19:11.726: INFO: (2) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x/proxy/: test (200; 5.354398ms) Aug 10 00:19:11.726: INFO: (2) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:162/proxy/: bar (200; 5.339486ms) Aug 10 00:19:11.726: INFO: (2) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:460/proxy/: tls baz (200; 5.745211ms) Aug 10 00:19:11.726: INFO: (2) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:462/proxy/: tls qux (200; 5.634882ms) Aug 10 00:19:11.726: INFO: (2) /api/v1/namespaces/proxy-6786/services/proxy-service-szhbc:portname1/proxy/: foo (200; 5.687925ms) Aug 10 00:19:11.726: INFO: (2) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:443/proxy/: ... 
(200; 5.9555ms) Aug 10 00:19:11.727: INFO: (2) /api/v1/namespaces/proxy-6786/services/https:proxy-service-szhbc:tlsportname1/proxy/: tls baz (200; 6.213701ms) Aug 10 00:19:11.733: INFO: (3) /api/v1/namespaces/proxy-6786/services/https:proxy-service-szhbc:tlsportname2/proxy/: tls qux (200; 6.074187ms) Aug 10 00:19:11.733: INFO: (3) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:160/proxy/: foo (200; 6.231334ms) Aug 10 00:19:11.733: INFO: (3) /api/v1/namespaces/proxy-6786/services/proxy-service-szhbc:portname1/proxy/: foo (200; 6.297407ms) Aug 10 00:19:11.733: INFO: (3) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:462/proxy/: tls qux (200; 6.202599ms) Aug 10 00:19:11.733: INFO: (3) /api/v1/namespaces/proxy-6786/services/http:proxy-service-szhbc:portname1/proxy/: foo (200; 6.341141ms) Aug 10 00:19:11.733: INFO: (3) /api/v1/namespaces/proxy-6786/services/proxy-service-szhbc:portname2/proxy/: bar (200; 6.6061ms) Aug 10 00:19:11.733: INFO: (3) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:162/proxy/: bar (200; 6.641666ms) Aug 10 00:19:11.733: INFO: (3) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:1080/proxy/: test<... (200; 6.658768ms) Aug 10 00:19:11.733: INFO: (3) /api/v1/namespaces/proxy-6786/services/https:proxy-service-szhbc:tlsportname1/proxy/: tls baz (200; 6.651011ms) Aug 10 00:19:11.733: INFO: (3) /api/v1/namespaces/proxy-6786/services/http:proxy-service-szhbc:portname2/proxy/: bar (200; 6.619812ms) Aug 10 00:19:11.734: INFO: (3) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:160/proxy/: foo (200; 6.80454ms) Aug 10 00:19:11.734: INFO: (3) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:1080/proxy/: ... 
(200; 6.602974ms) Aug 10 00:19:11.734: INFO: (3) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:162/proxy/: bar (200; 6.884748ms) Aug 10 00:19:11.734: INFO: (3) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x/proxy/: test (200; 6.932351ms) Aug 10 00:19:11.734: INFO: (3) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:460/proxy/: tls baz (200; 7.472512ms) Aug 10 00:19:11.735: INFO: (3) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:443/proxy/: test (200; 5.100726ms) Aug 10 00:19:11.740: INFO: (4) /api/v1/namespaces/proxy-6786/services/http:proxy-service-szhbc:portname1/proxy/: foo (200; 5.106459ms) Aug 10 00:19:11.740: INFO: (4) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:160/proxy/: foo (200; 5.150768ms) Aug 10 00:19:11.740: INFO: (4) /api/v1/namespaces/proxy-6786/services/http:proxy-service-szhbc:portname2/proxy/: bar (200; 5.236322ms) Aug 10 00:19:11.740: INFO: (4) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:160/proxy/: foo (200; 5.283224ms) Aug 10 00:19:11.740: INFO: (4) /api/v1/namespaces/proxy-6786/services/https:proxy-service-szhbc:tlsportname2/proxy/: tls qux (200; 5.250133ms) Aug 10 00:19:11.740: INFO: (4) /api/v1/namespaces/proxy-6786/services/https:proxy-service-szhbc:tlsportname1/proxy/: tls baz (200; 5.307888ms) Aug 10 00:19:11.740: INFO: (4) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:162/proxy/: bar (200; 5.274267ms) Aug 10 00:19:11.740: INFO: (4) /api/v1/namespaces/proxy-6786/services/proxy-service-szhbc:portname2/proxy/: bar (200; 5.251182ms) Aug 10 00:19:11.740: INFO: (4) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:1080/proxy/: test<... (200; 5.408538ms) Aug 10 00:19:11.740: INFO: (4) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:1080/proxy/: ... (200; 5.418088ms) Aug 10 00:19:11.740: INFO: (4) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:443/proxy/: test<... 
(200; 5.263603ms) Aug 10 00:19:11.746: INFO: (5) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:162/proxy/: bar (200; 5.310524ms) Aug 10 00:19:11.746: INFO: (5) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x/proxy/: test (200; 5.308974ms) Aug 10 00:19:11.746: INFO: (5) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:462/proxy/: tls qux (200; 5.296379ms) Aug 10 00:19:11.746: INFO: (5) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:162/proxy/: bar (200; 5.330219ms) Aug 10 00:19:11.746: INFO: (5) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:1080/proxy/: ... (200; 5.353621ms) Aug 10 00:19:11.746: INFO: (5) /api/v1/namespaces/proxy-6786/services/proxy-service-szhbc:portname2/proxy/: bar (200; 5.525506ms) Aug 10 00:19:11.746: INFO: (5) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:160/proxy/: foo (200; 5.53139ms) Aug 10 00:19:11.746: INFO: (5) /api/v1/namespaces/proxy-6786/services/proxy-service-szhbc:portname1/proxy/: foo (200; 5.381941ms) Aug 10 00:19:11.750: INFO: (6) /api/v1/namespaces/proxy-6786/services/http:proxy-service-szhbc:portname2/proxy/: bar (200; 3.379374ms) Aug 10 00:19:11.750: INFO: (6) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:162/proxy/: bar (200; 3.074783ms) Aug 10 00:19:11.750: INFO: (6) /api/v1/namespaces/proxy-6786/services/proxy-service-szhbc:portname1/proxy/: foo (200; 4.098134ms) Aug 10 00:19:11.750: INFO: (6) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:1080/proxy/: test<... 
(200; 3.459335ms) Aug 10 00:19:11.750: INFO: (6) /api/v1/namespaces/proxy-6786/services/proxy-service-szhbc:portname2/proxy/: bar (200; 4.189884ms) Aug 10 00:19:11.750: INFO: (6) /api/v1/namespaces/proxy-6786/services/http:proxy-service-szhbc:portname1/proxy/: foo (200; 4.106655ms) Aug 10 00:19:11.750: INFO: (6) /api/v1/namespaces/proxy-6786/services/https:proxy-service-szhbc:tlsportname1/proxy/: tls baz (200; 4.051555ms) Aug 10 00:19:11.750: INFO: (6) /api/v1/namespaces/proxy-6786/services/https:proxy-service-szhbc:tlsportname2/proxy/: tls qux (200; 3.729372ms) Aug 10 00:19:11.750: INFO: (6) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:160/proxy/: foo (200; 3.576629ms) Aug 10 00:19:11.751: INFO: (6) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:162/proxy/: bar (200; 4.355295ms) Aug 10 00:19:11.751: INFO: (6) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:443/proxy/: test (200; 4.270305ms) Aug 10 00:19:11.751: INFO: (6) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:1080/proxy/: ... 
(200; 4.64271ms) Aug 10 00:19:11.751: INFO: (6) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:462/proxy/: tls qux (200; 5.20051ms) Aug 10 00:19:11.751: INFO: (6) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:160/proxy/: foo (200; 5.165528ms) Aug 10 00:19:11.751: INFO: (6) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:460/proxy/: tls baz (200; 5.062427ms) Aug 10 00:19:11.753: INFO: (7) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:460/proxy/: tls baz (200; 2.013818ms) Aug 10 00:19:11.756: INFO: (7) /api/v1/namespaces/proxy-6786/services/proxy-service-szhbc:portname1/proxy/: foo (200; 4.679949ms) Aug 10 00:19:11.756: INFO: (7) /api/v1/namespaces/proxy-6786/services/https:proxy-service-szhbc:tlsportname1/proxy/: tls baz (200; 4.644863ms) Aug 10 00:19:11.756: INFO: (7) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:1080/proxy/: test<... (200; 4.936405ms) Aug 10 00:19:11.756: INFO: (7) /api/v1/namespaces/proxy-6786/services/https:proxy-service-szhbc:tlsportname2/proxy/: tls qux (200; 4.91935ms) Aug 10 00:19:11.756: INFO: (7) /api/v1/namespaces/proxy-6786/services/proxy-service-szhbc:portname2/proxy/: bar (200; 4.9221ms) Aug 10 00:19:11.756: INFO: (7) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:162/proxy/: bar (200; 4.978396ms) Aug 10 00:19:11.756: INFO: (7) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:160/proxy/: foo (200; 4.98998ms) Aug 10 00:19:11.756: INFO: (7) /api/v1/namespaces/proxy-6786/services/http:proxy-service-szhbc:portname2/proxy/: bar (200; 5.101833ms) Aug 10 00:19:11.756: INFO: (7) /api/v1/namespaces/proxy-6786/services/http:proxy-service-szhbc:portname1/proxy/: foo (200; 5.165024ms) Aug 10 00:19:11.757: INFO: (7) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:162/proxy/: bar (200; 5.467906ms) Aug 10 00:19:11.757: INFO: (7) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x/proxy/: test (200; 5.467914ms) 
Aug 10 00:19:11.757: INFO: (7) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:160/proxy/: foo (200; 5.540569ms) Aug 10 00:19:11.757: INFO: (7) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:1080/proxy/: ... (200; 6.043663ms) Aug 10 00:19:11.757: INFO: (7) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:443/proxy/: test (200; 4.842361ms) Aug 10 00:19:11.762: INFO: (8) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:160/proxy/: foo (200; 4.85547ms) Aug 10 00:19:11.762: INFO: (8) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:1080/proxy/: test<... (200; 4.884941ms) Aug 10 00:19:11.762: INFO: (8) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:462/proxy/: tls qux (200; 5.031212ms) Aug 10 00:19:11.763: INFO: (8) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:1080/proxy/: ... (200; 5.00745ms) Aug 10 00:19:11.765: INFO: (9) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:160/proxy/: foo (200; 2.433374ms) Aug 10 00:19:11.765: INFO: (9) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:443/proxy/: test<... (200; 3.272937ms) Aug 10 00:19:11.766: INFO: (9) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:1080/proxy/: ... 
(200; 3.651687ms) Aug 10 00:19:11.766: INFO: (9) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:460/proxy/: tls baz (200; 3.677493ms) Aug 10 00:19:11.766: INFO: (9) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:462/proxy/: tls qux (200; 3.68975ms) Aug 10 00:19:11.766: INFO: (9) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x/proxy/: test (200; 3.732369ms) Aug 10 00:19:11.766: INFO: (9) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:162/proxy/: bar (200; 3.800501ms) Aug 10 00:19:11.766: INFO: (9) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:162/proxy/: bar (200; 3.749984ms) Aug 10 00:19:11.768: INFO: (9) /api/v1/namespaces/proxy-6786/services/http:proxy-service-szhbc:portname2/proxy/: bar (200; 5.128774ms) Aug 10 00:19:11.768: INFO: (9) /api/v1/namespaces/proxy-6786/services/http:proxy-service-szhbc:portname1/proxy/: foo (200; 5.249776ms) Aug 10 00:19:11.768: INFO: (9) /api/v1/namespaces/proxy-6786/services/https:proxy-service-szhbc:tlsportname2/proxy/: tls qux (200; 5.233311ms) Aug 10 00:19:11.768: INFO: (9) /api/v1/namespaces/proxy-6786/services/proxy-service-szhbc:portname1/proxy/: foo (200; 5.298647ms) Aug 10 00:19:11.768: INFO: (9) /api/v1/namespaces/proxy-6786/services/proxy-service-szhbc:portname2/proxy/: bar (200; 5.287445ms) Aug 10 00:19:11.768: INFO: (9) /api/v1/namespaces/proxy-6786/services/https:proxy-service-szhbc:tlsportname1/proxy/: tls baz (200; 5.367037ms) Aug 10 00:19:11.772: INFO: (10) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:1080/proxy/: test<... (200; 3.561077ms) Aug 10 00:19:11.772: INFO: (10) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:1080/proxy/: ... 
(200; 3.593248ms) Aug 10 00:19:11.772: INFO: (10) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:162/proxy/: bar (200; 3.645983ms) Aug 10 00:19:11.772: INFO: (10) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:460/proxy/: tls baz (200; 3.594798ms) Aug 10 00:19:11.772: INFO: (10) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:443/proxy/: test (200; 3.794369ms) Aug 10 00:19:11.772: INFO: (10) /api/v1/namespaces/proxy-6786/services/http:proxy-service-szhbc:portname2/proxy/: bar (200; 4.101867ms) Aug 10 00:19:11.773: INFO: (10) /api/v1/namespaces/proxy-6786/services/https:proxy-service-szhbc:tlsportname2/proxy/: tls qux (200; 4.418071ms) Aug 10 00:19:11.773: INFO: (10) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:162/proxy/: bar (200; 4.534933ms) Aug 10 00:19:11.773: INFO: (10) /api/v1/namespaces/proxy-6786/services/proxy-service-szhbc:portname2/proxy/: bar (200; 4.533599ms) Aug 10 00:19:11.773: INFO: (10) /api/v1/namespaces/proxy-6786/services/proxy-service-szhbc:portname1/proxy/: foo (200; 4.551049ms) Aug 10 00:19:11.773: INFO: (10) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:160/proxy/: foo (200; 4.494517ms) Aug 10 00:19:11.773: INFO: (10) /api/v1/namespaces/proxy-6786/services/https:proxy-service-szhbc:tlsportname1/proxy/: tls baz (200; 4.575923ms) Aug 10 00:19:11.773: INFO: (10) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:160/proxy/: foo (200; 4.72048ms) Aug 10 00:19:11.773: INFO: (10) /api/v1/namespaces/proxy-6786/services/http:proxy-service-szhbc:portname1/proxy/: foo (200; 4.76719ms) Aug 10 00:19:11.796: INFO: (11) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:460/proxy/: tls baz (200; 22.770436ms) Aug 10 00:19:11.796: INFO: (11) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x/proxy/: test (200; 22.814916ms) Aug 10 00:19:11.796: INFO: (11) 
/api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:1080/proxy/: ... (200; 22.826131ms) Aug 10 00:19:11.796: INFO: (11) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:443/proxy/: test<... (200; 23.711996ms) Aug 10 00:19:11.797: INFO: (11) /api/v1/namespaces/proxy-6786/services/https:proxy-service-szhbc:tlsportname1/proxy/: tls baz (200; 24.422131ms) Aug 10 00:19:11.797: INFO: (11) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:160/proxy/: foo (200; 24.520686ms) Aug 10 00:19:11.798: INFO: (11) /api/v1/namespaces/proxy-6786/services/http:proxy-service-szhbc:portname2/proxy/: bar (200; 24.644573ms) Aug 10 00:19:11.798: INFO: (11) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:462/proxy/: tls qux (200; 24.645432ms) Aug 10 00:19:11.798: INFO: (11) /api/v1/namespaces/proxy-6786/services/http:proxy-service-szhbc:portname1/proxy/: foo (200; 24.602376ms) Aug 10 00:19:11.798: INFO: (11) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:162/proxy/: bar (200; 24.683079ms) Aug 10 00:19:11.798: INFO: (11) /api/v1/namespaces/proxy-6786/services/proxy-service-szhbc:portname2/proxy/: bar (200; 25.142404ms) Aug 10 00:19:11.798: INFO: (11) /api/v1/namespaces/proxy-6786/services/proxy-service-szhbc:portname1/proxy/: foo (200; 25.294626ms) Aug 10 00:19:11.798: INFO: (11) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:162/proxy/: bar (200; 25.442204ms) Aug 10 00:19:11.798: INFO: (11) /api/v1/namespaces/proxy-6786/services/https:proxy-service-szhbc:tlsportname2/proxy/: tls qux (200; 25.507902ms) Aug 10 00:19:11.803: INFO: (12) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:160/proxy/: foo (200; 4.086308ms) Aug 10 00:19:11.804: INFO: (12) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:1080/proxy/: ... 
(200; 4.913925ms) Aug 10 00:19:11.804: INFO: (12) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:162/proxy/: bar (200; 5.675928ms) Aug 10 00:19:11.804: INFO: (12) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:160/proxy/: foo (200; 5.848325ms) Aug 10 00:19:11.805: INFO: (12) /api/v1/namespaces/proxy-6786/services/proxy-service-szhbc:portname1/proxy/: foo (200; 6.045676ms) Aug 10 00:19:11.805: INFO: (12) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x/proxy/: test (200; 6.214659ms) Aug 10 00:19:11.805: INFO: (12) /api/v1/namespaces/proxy-6786/services/proxy-service-szhbc:portname2/proxy/: bar (200; 6.132175ms) Aug 10 00:19:11.805: INFO: (12) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:460/proxy/: tls baz (200; 6.357573ms) Aug 10 00:19:11.805: INFO: (12) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:443/proxy/: test<... (200; 6.58616ms) Aug 10 00:19:11.805: INFO: (12) /api/v1/namespaces/proxy-6786/services/http:proxy-service-szhbc:portname2/proxy/: bar (200; 6.693245ms) Aug 10 00:19:11.805: INFO: (12) /api/v1/namespaces/proxy-6786/services/http:proxy-service-szhbc:portname1/proxy/: foo (200; 6.762483ms) Aug 10 00:19:11.805: INFO: (12) /api/v1/namespaces/proxy-6786/services/https:proxy-service-szhbc:tlsportname1/proxy/: tls baz (200; 6.778625ms) Aug 10 00:19:11.806: INFO: (12) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:462/proxy/: tls qux (200; 6.943609ms) Aug 10 00:19:11.806: INFO: (12) /api/v1/namespaces/proxy-6786/services/https:proxy-service-szhbc:tlsportname2/proxy/: tls qux (200; 6.818875ms) Aug 10 00:19:11.811: INFO: (13) /api/v1/namespaces/proxy-6786/services/https:proxy-service-szhbc:tlsportname2/proxy/: tls qux (200; 5.514454ms) Aug 10 00:19:11.812: INFO: (13) /api/v1/namespaces/proxy-6786/services/http:proxy-service-szhbc:portname2/proxy/: bar (200; 5.865885ms) Aug 10 00:19:11.812: INFO: (13) 
/api/v1/namespaces/proxy-6786/services/http:proxy-service-szhbc:portname1/proxy/: foo (200; 5.937707ms) Aug 10 00:19:11.812: INFO: (13) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:162/proxy/: bar (200; 5.906662ms) Aug 10 00:19:11.812: INFO: (13) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:1080/proxy/: test<... (200; 5.894199ms) Aug 10 00:19:11.812: INFO: (13) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:162/proxy/: bar (200; 6.329756ms) Aug 10 00:19:11.812: INFO: (13) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:443/proxy/: test (200; 7.251272ms) Aug 10 00:19:11.813: INFO: (13) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:462/proxy/: tls qux (200; 7.252711ms) Aug 10 00:19:11.813: INFO: (13) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:460/proxy/: tls baz (200; 7.174511ms) Aug 10 00:19:11.813: INFO: (13) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:1080/proxy/: ... (200; 7.204236ms) Aug 10 00:19:11.813: INFO: (13) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:160/proxy/: foo (200; 7.213351ms) Aug 10 00:19:11.816: INFO: (14) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:443/proxy/: ... (200; 3.394886ms) Aug 10 00:19:11.817: INFO: (14) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:162/proxy/: bar (200; 3.498619ms) Aug 10 00:19:11.817: INFO: (14) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:160/proxy/: foo (200; 3.686495ms) Aug 10 00:19:11.817: INFO: (14) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:462/proxy/: tls qux (200; 3.722381ms) Aug 10 00:19:11.817: INFO: (14) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:1080/proxy/: test<... 
(200; 3.94236ms) Aug 10 00:19:11.817: INFO: (14) /api/v1/namespaces/proxy-6786/services/https:proxy-service-szhbc:tlsportname2/proxy/: tls qux (200; 4.4442ms) Aug 10 00:19:11.818: INFO: (14) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:162/proxy/: bar (200; 4.549413ms) Aug 10 00:19:11.818: INFO: (14) /api/v1/namespaces/proxy-6786/services/proxy-service-szhbc:portname2/proxy/: bar (200; 4.784579ms) Aug 10 00:19:11.818: INFO: (14) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:160/proxy/: foo (200; 4.80927ms) Aug 10 00:19:11.818: INFO: (14) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x/proxy/: test (200; 5.169378ms) Aug 10 00:19:11.818: INFO: (14) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:460/proxy/: tls baz (200; 5.143087ms) Aug 10 00:19:11.818: INFO: (14) /api/v1/namespaces/proxy-6786/services/http:proxy-service-szhbc:portname1/proxy/: foo (200; 5.190238ms) Aug 10 00:19:11.818: INFO: (14) /api/v1/namespaces/proxy-6786/services/http:proxy-service-szhbc:portname2/proxy/: bar (200; 5.15648ms) Aug 10 00:19:11.818: INFO: (14) /api/v1/namespaces/proxy-6786/services/proxy-service-szhbc:portname1/proxy/: foo (200; 5.248685ms) Aug 10 00:19:11.818: INFO: (14) /api/v1/namespaces/proxy-6786/services/https:proxy-service-szhbc:tlsportname1/proxy/: tls baz (200; 5.261437ms) Aug 10 00:19:11.822: INFO: (15) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:1080/proxy/: ... 
(200; 3.590608ms) Aug 10 00:19:11.822: INFO: (15) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:160/proxy/: foo (200; 3.718754ms) Aug 10 00:19:11.831: INFO: (15) /api/v1/namespaces/proxy-6786/services/proxy-service-szhbc:portname2/proxy/: bar (200; 12.521581ms) Aug 10 00:19:11.831: INFO: (15) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:462/proxy/: tls qux (200; 12.472811ms) Aug 10 00:19:11.831: INFO: (15) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:1080/proxy/: test<... (200; 12.459489ms) Aug 10 00:19:11.831: INFO: (15) /api/v1/namespaces/proxy-6786/services/http:proxy-service-szhbc:portname2/proxy/: bar (200; 12.978877ms) Aug 10 00:19:11.832: INFO: (15) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:162/proxy/: bar (200; 13.142262ms) Aug 10 00:19:11.832: INFO: (15) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:160/proxy/: foo (200; 13.099402ms) Aug 10 00:19:11.832: INFO: (15) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:460/proxy/: tls baz (200; 13.107858ms) Aug 10 00:19:11.832: INFO: (15) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:443/proxy/: test (200; 13.625161ms) Aug 10 00:19:11.834: INFO: (15) /api/v1/namespaces/proxy-6786/services/proxy-service-szhbc:portname1/proxy/: foo (200; 15.782284ms) Aug 10 00:19:11.834: INFO: (15) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:162/proxy/: bar (200; 15.81088ms) Aug 10 00:19:11.834: INFO: (15) /api/v1/namespaces/proxy-6786/services/https:proxy-service-szhbc:tlsportname2/proxy/: tls qux (200; 15.911701ms) Aug 10 00:19:11.834: INFO: (15) /api/v1/namespaces/proxy-6786/services/https:proxy-service-szhbc:tlsportname1/proxy/: tls baz (200; 16.109045ms) Aug 10 00:19:11.835: INFO: (15) /api/v1/namespaces/proxy-6786/services/http:proxy-service-szhbc:portname1/proxy/: foo (200; 16.272073ms) Aug 10 00:19:11.838: INFO: (16) 
/api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:1080/proxy/: ... (200; 3.047341ms) Aug 10 00:19:11.838: INFO: (16) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:160/proxy/: foo (200; 3.094206ms) Aug 10 00:19:11.838: INFO: (16) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:162/proxy/: bar (200; 3.447861ms) Aug 10 00:19:11.838: INFO: (16) /api/v1/namespaces/proxy-6786/services/proxy-service-szhbc:portname2/proxy/: bar (200; 3.697982ms) Aug 10 00:19:11.839: INFO: (16) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:1080/proxy/: test<... (200; 4.286476ms) Aug 10 00:19:11.839: INFO: (16) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:162/proxy/: bar (200; 4.690168ms) Aug 10 00:19:11.840: INFO: (16) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x/proxy/: test (200; 4.843595ms) Aug 10 00:19:11.840: INFO: (16) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:460/proxy/: tls baz (200; 5.642531ms) Aug 10 00:19:11.840: INFO: (16) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:443/proxy/: test (200; 1.92134ms) Aug 10 00:19:11.844: INFO: (17) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:1080/proxy/: test<... (200; 1.914206ms) Aug 10 00:19:11.846: INFO: (17) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:160/proxy/: foo (200; 3.786952ms) Aug 10 00:19:11.846: INFO: (17) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:443/proxy/: ... 
(200; 3.723426ms) Aug 10 00:19:11.846: INFO: (17) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:162/proxy/: bar (200; 3.719076ms) Aug 10 00:19:11.846: INFO: (17) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:162/proxy/: bar (200; 4.367053ms) Aug 10 00:19:11.846: INFO: (17) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:462/proxy/: tls qux (200; 4.298326ms) Aug 10 00:19:11.846: INFO: (17) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:160/proxy/: foo (200; 3.987611ms) Aug 10 00:19:11.847: INFO: (17) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:460/proxy/: tls baz (200; 4.086557ms) Aug 10 00:19:11.869: INFO: (17) /api/v1/namespaces/proxy-6786/services/proxy-service-szhbc:portname2/proxy/: bar (200; 25.846377ms) Aug 10 00:19:11.869: INFO: (17) /api/v1/namespaces/proxy-6786/services/http:proxy-service-szhbc:portname2/proxy/: bar (200; 26.423184ms) Aug 10 00:19:11.869: INFO: (17) /api/v1/namespaces/proxy-6786/services/proxy-service-szhbc:portname1/proxy/: foo (200; 25.801099ms) Aug 10 00:19:11.869: INFO: (17) /api/v1/namespaces/proxy-6786/services/http:proxy-service-szhbc:portname1/proxy/: foo (200; 26.884646ms) Aug 10 00:19:11.869: INFO: (17) /api/v1/namespaces/proxy-6786/services/https:proxy-service-szhbc:tlsportname2/proxy/: tls qux (200; 27.043029ms) Aug 10 00:19:11.870: INFO: (17) /api/v1/namespaces/proxy-6786/services/https:proxy-service-szhbc:tlsportname1/proxy/: tls baz (200; 26.830171ms) Aug 10 00:19:11.882: INFO: (18) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:443/proxy/: test<... (200; 13.057183ms) Aug 10 00:19:11.883: INFO: (18) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:1080/proxy/: ... 
(200; 13.167723ms) Aug 10 00:19:11.883: INFO: (18) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:462/proxy/: tls qux (200; 13.350411ms) Aug 10 00:19:11.884: INFO: (18) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x/proxy/: test (200; 14.060855ms) Aug 10 00:19:11.884: INFO: (18) /api/v1/namespaces/proxy-6786/services/proxy-service-szhbc:portname1/proxy/: foo (200; 14.167285ms) Aug 10 00:19:11.884: INFO: (18) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:160/proxy/: foo (200; 14.123716ms) Aug 10 00:19:11.884: INFO: (18) /api/v1/namespaces/proxy-6786/services/https:proxy-service-szhbc:tlsportname1/proxy/: tls baz (200; 14.229889ms) Aug 10 00:19:11.884: INFO: (18) /api/v1/namespaces/proxy-6786/services/https:proxy-service-szhbc:tlsportname2/proxy/: tls qux (200; 14.211293ms) Aug 10 00:19:11.884: INFO: (18) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:162/proxy/: bar (200; 14.280552ms) Aug 10 00:19:11.884: INFO: (18) /api/v1/namespaces/proxy-6786/services/http:proxy-service-szhbc:portname2/proxy/: bar (200; 14.541605ms) Aug 10 00:19:11.884: INFO: (18) /api/v1/namespaces/proxy-6786/services/http:proxy-service-szhbc:portname1/proxy/: foo (200; 14.682114ms) Aug 10 00:19:11.884: INFO: (18) /api/v1/namespaces/proxy-6786/services/proxy-service-szhbc:portname2/proxy/: bar (200; 14.690035ms) Aug 10 00:19:11.885: INFO: (18) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:162/proxy/: bar (200; 14.984063ms) Aug 10 00:19:11.885: INFO: (18) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:460/proxy/: tls baz (200; 15.041746ms) Aug 10 00:19:11.889: INFO: (19) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:162/proxy/: bar (200; 3.969935ms) Aug 10 00:19:11.889: INFO: (19) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:162/proxy/: bar (200; 4.101041ms) Aug 10 00:19:11.889: INFO: (19) 
/api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:443/proxy/: test (200; 4.104359ms) Aug 10 00:19:11.889: INFO: (19) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:160/proxy/: foo (200; 4.088852ms) Aug 10 00:19:11.889: INFO: (19) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:460/proxy/: tls baz (200; 4.086277ms) Aug 10 00:19:11.892: INFO: (19) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:1080/proxy/: test<... (200; 7.663593ms) Aug 10 00:19:11.893: INFO: (19) /api/v1/namespaces/proxy-6786/pods/http:proxy-service-szhbc-mt89x:1080/proxy/: ... (200; 7.667799ms) Aug 10 00:19:11.893: INFO: (19) /api/v1/namespaces/proxy-6786/pods/https:proxy-service-szhbc-mt89x:462/proxy/: tls qux (200; 7.694563ms) Aug 10 00:19:11.893: INFO: (19) /api/v1/namespaces/proxy-6786/pods/proxy-service-szhbc-mt89x:160/proxy/: foo (200; 7.773278ms) Aug 10 00:19:11.894: INFO: (19) /api/v1/namespaces/proxy-6786/services/proxy-service-szhbc:portname1/proxy/: foo (200; 8.800217ms) Aug 10 00:19:11.894: INFO: (19) /api/v1/namespaces/proxy-6786/services/http:proxy-service-szhbc:portname2/proxy/: bar (200; 9.216068ms) Aug 10 00:19:11.894: INFO: (19) /api/v1/namespaces/proxy-6786/services/proxy-service-szhbc:portname2/proxy/: bar (200; 9.177559ms) Aug 10 00:19:11.894: INFO: (19) /api/v1/namespaces/proxy-6786/services/https:proxy-service-szhbc:tlsportname2/proxy/: tls qux (200; 9.234253ms) Aug 10 00:19:11.894: INFO: (19) /api/v1/namespaces/proxy-6786/services/https:proxy-service-szhbc:tlsportname1/proxy/: tls baz (200; 9.198752ms) Aug 10 00:19:11.894: INFO: (19) /api/v1/namespaces/proxy-6786/services/http:proxy-service-szhbc:portname1/proxy/: foo (200; 9.317094ms) STEP: deleting ReplicationController proxy-service-szhbc in namespace proxy-6786, will wait for the garbage collector to delete the pods Aug 10 00:19:11.979: INFO: Deleting ReplicationController proxy-service-szhbc took: 7.479943ms Aug 10 00:19:12.379: INFO: Terminating 
ReplicationController proxy-service-szhbc pods took: 400.221917ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 00:19:23.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-6786" for this suite.
• [SLOW TEST:24.359 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59
    should proxy through a service and a pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":303,"completed":134,"skipped":2233,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 00:19:23.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 10 00:19:27.076: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 00:19:27.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5679" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":303,"completed":135,"skipped":2273,"failed":0}
SSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 00:19:27.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89
Aug 10 00:19:27.210: INFO: Waiting up to 1m0s for all nodes to be ready
Aug 10 00:20:27.232: INFO: Waiting for terminating namespaces to be deleted...
[It] validates lower priority pod preemption by critical pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create pods that use 2/3 of node resources. Aug 10 00:20:27.285: INFO: Created pod: pod0-sched-preemption-low-priority Aug 10 00:20:27.341: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a critical pod that use same resources as that of a lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:20:57.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-8322" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:90.461 seconds] [sig-scheduling] SchedulerPreemption [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates lower priority pod preemption by critical pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":303,"completed":136,"skipped":2278,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:20:57.593: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating secret secrets-3665/secret-test-98106874-9675-486c-8ffe-a8712c4b5e13 STEP: Creating a pod to test consume secrets Aug 10 00:20:57.905: INFO: Waiting up to 5m0s for pod "pod-configmaps-20f8ecfb-1636-477f-9842-f7e86c205919" in namespace "secrets-3665" to be "Succeeded or Failed" Aug 10 00:20:57.958: INFO: Pod "pod-configmaps-20f8ecfb-1636-477f-9842-f7e86c205919": Phase="Pending", Reason="", readiness=false. Elapsed: 52.466425ms Aug 10 00:20:59.962: INFO: Pod "pod-configmaps-20f8ecfb-1636-477f-9842-f7e86c205919": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057165908s Aug 10 00:21:01.976: INFO: Pod "pod-configmaps-20f8ecfb-1636-477f-9842-f7e86c205919": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070731917s STEP: Saw pod success Aug 10 00:21:01.976: INFO: Pod "pod-configmaps-20f8ecfb-1636-477f-9842-f7e86c205919" satisfied condition "Succeeded or Failed" Aug 10 00:21:01.978: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-20f8ecfb-1636-477f-9842-f7e86c205919 container env-test: STEP: delete the pod Aug 10 00:21:02.010: INFO: Waiting for pod pod-configmaps-20f8ecfb-1636-477f-9842-f7e86c205919 to disappear Aug 10 00:21:02.014: INFO: Pod pod-configmaps-20f8ecfb-1636-477f-9842-f7e86c205919 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:21:02.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3665" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":303,"completed":137,"skipped":2294,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:21:02.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-3720ca8f-2057-49df-8262-9ac93235198b STEP: Creating a pod to test consume configMaps Aug 10 00:21:02.238: INFO: Waiting up to 5m0s for pod "pod-configmaps-6f700765-2306-4d39-8ea9-d2ddc329268c" in namespace "configmap-6457" to be "Succeeded or Failed" Aug 10 00:21:02.257: INFO: Pod "pod-configmaps-6f700765-2306-4d39-8ea9-d2ddc329268c": Phase="Pending", Reason="", readiness=false. Elapsed: 18.791084ms Aug 10 00:21:04.501: INFO: Pod "pod-configmaps-6f700765-2306-4d39-8ea9-d2ddc329268c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.263027366s Aug 10 00:21:06.505: INFO: Pod "pod-configmaps-6f700765-2306-4d39-8ea9-d2ddc329268c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.266529132s STEP: Saw pod success Aug 10 00:21:06.505: INFO: Pod "pod-configmaps-6f700765-2306-4d39-8ea9-d2ddc329268c" satisfied condition "Succeeded or Failed" Aug 10 00:21:06.513: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-6f700765-2306-4d39-8ea9-d2ddc329268c container configmap-volume-test: STEP: delete the pod Aug 10 00:21:06.682: INFO: Waiting for pod pod-configmaps-6f700765-2306-4d39-8ea9-d2ddc329268c to disappear Aug 10 00:21:06.692: INFO: Pod pod-configmaps-6f700765-2306-4d39-8ea9-d2ddc329268c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:21:06.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6457" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":303,"completed":138,"skipped":2305,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:21:06.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the 
/apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:21:06.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2455" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":303,"completed":139,"skipped":2335,"failed":0} SSSSS ------------------------------ [sig-network] Services should test the lifecycle of an Endpoint [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:21:06.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should test the lifecycle of an Endpoint [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating an Endpoint STEP: waiting for available Endpoint STEP: listing all Endpoints STEP: updating the Endpoint STEP: fetching the Endpoint STEP: patching the Endpoint STEP: fetching the Endpoint STEP: deleting the Endpoint by Collection STEP: waiting for Endpoint deletion STEP: fetching the Endpoint [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:21:06.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9220" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 •{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":303,"completed":140,"skipped":2340,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:21:06.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:21:18.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2153" for this suite. • [SLOW TEST:11.129 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":303,"completed":141,"skipped":2364,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:21:18.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Aug 10 00:21:18.283: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 10 00:21:18.290: INFO: Number of nodes with available pods: 0 Aug 10 00:21:18.290: INFO: Node latest-worker is running more than one daemon pod Aug 10 00:21:19.295: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 10 00:21:19.299: INFO: Number of nodes with available pods: 0 Aug 10 00:21:19.299: INFO: Node latest-worker is running more than one daemon pod Aug 10 00:21:20.577: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 10 00:21:20.581: INFO: Number of nodes with available pods: 0 Aug 10 00:21:20.581: INFO: Node latest-worker is running more than one daemon pod Aug 10 00:21:21.296: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 10 00:21:21.300: INFO: Number of nodes with available pods: 0 Aug 10 00:21:21.300: INFO: Node latest-worker is running more than one daemon pod Aug 10 00:21:22.295: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 10 00:21:22.298: INFO: Number of nodes with available pods: 1 Aug 10 00:21:22.298: INFO: Node latest-worker is running more than one daemon pod Aug 10 00:21:23.314: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 10 00:21:23.318: INFO: Number of nodes with available pods: 2 Aug 10 00:21:23.318: 
INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Aug 10 00:21:23.379: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 10 00:21:23.395: INFO: Number of nodes with available pods: 1 Aug 10 00:21:23.395: INFO: Node latest-worker2 is running more than one daemon pod Aug 10 00:21:24.400: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 10 00:21:24.405: INFO: Number of nodes with available pods: 1 Aug 10 00:21:24.405: INFO: Node latest-worker2 is running more than one daemon pod Aug 10 00:21:25.400: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 10 00:21:25.404: INFO: Number of nodes with available pods: 1 Aug 10 00:21:25.404: INFO: Node latest-worker2 is running more than one daemon pod Aug 10 00:21:26.401: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 10 00:21:26.404: INFO: Number of nodes with available pods: 1 Aug 10 00:21:26.404: INFO: Node latest-worker2 is running more than one daemon pod Aug 10 00:21:27.400: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 10 00:21:27.404: INFO: Number of nodes with available pods: 1 Aug 10 00:21:27.404: INFO: Node latest-worker2 is running more than one daemon pod Aug 10 00:21:28.401: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node Aug 10 00:21:28.405: INFO: Number of nodes with available pods: 1 Aug 10 00:21:28.405: INFO: Node latest-worker2 is running more than one daemon pod Aug 10 00:21:29.400: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 10 00:21:29.404: INFO: Number of nodes with available pods: 1 Aug 10 00:21:29.404: INFO: Node latest-worker2 is running more than one daemon pod Aug 10 00:21:30.401: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 10 00:21:30.405: INFO: Number of nodes with available pods: 1 Aug 10 00:21:30.405: INFO: Node latest-worker2 is running more than one daemon pod Aug 10 00:21:31.400: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 10 00:21:31.404: INFO: Number of nodes with available pods: 1 Aug 10 00:21:31.404: INFO: Node latest-worker2 is running more than one daemon pod Aug 10 00:21:32.402: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 10 00:21:32.405: INFO: Number of nodes with available pods: 1 Aug 10 00:21:32.405: INFO: Node latest-worker2 is running more than one daemon pod Aug 10 00:21:33.399: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 10 00:21:33.406: INFO: Number of nodes with available pods: 1 Aug 10 00:21:33.406: INFO: Node latest-worker2 is running more than one daemon pod Aug 10 00:21:34.400: INFO: DaemonSet pods can't tolerate node 
latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 10 00:21:34.403: INFO: Number of nodes with available pods: 1 Aug 10 00:21:34.403: INFO: Node latest-worker2 is running more than one daemon pod Aug 10 00:21:35.400: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 10 00:21:35.404: INFO: Number of nodes with available pods: 1 Aug 10 00:21:35.404: INFO: Node latest-worker2 is running more than one daemon pod Aug 10 00:21:36.400: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 10 00:21:36.403: INFO: Number of nodes with available pods: 1 Aug 10 00:21:36.403: INFO: Node latest-worker2 is running more than one daemon pod Aug 10 00:21:37.401: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 10 00:21:37.405: INFO: Number of nodes with available pods: 2 Aug 10 00:21:37.405: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7636, will wait for the garbage collector to delete the pods Aug 10 00:21:37.468: INFO: Deleting DaemonSet.extensions daemon-set took: 7.336427ms Aug 10 00:21:37.869: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.233471ms Aug 10 00:21:43.872: INFO: Number of nodes with available pods: 0 Aug 10 00:21:43.872: INFO: Number of running nodes: 0, number of available pods: 0 Aug 10 00:21:43.875: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7636/daemonsets","resourceVersion":"5784522"},"items":null} Aug 10 00:21:43.878: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7636/pods","resourceVersion":"5784522"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:21:43.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7636" for this suite. • [SLOW TEST:25.841 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":303,"completed":142,"skipped":2401,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:21:43.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap that has name configmap-test-emptyKey-0f5d8a02-57d6-455b-aa26-68cd316f72a0 [AfterEach] [sig-node] ConfigMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:21:44.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-196" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":303,"completed":143,"skipped":2422,"failed":0} SSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:21:44.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-8b1c6c63-44aa-4ace-8d6a-dcfb617dbc42 STEP: Creating a pod to test consume secrets Aug 10 00:21:44.216: INFO: Waiting up to 5m0s for pod "pod-secrets-c55af743-bd1d-4c19-9125-df1928dc497a" in namespace "secrets-9869" to be "Succeeded or Failed" Aug 10 00:21:44.220: INFO: Pod "pod-secrets-c55af743-bd1d-4c19-9125-df1928dc497a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.495698ms Aug 10 00:21:46.224: INFO: Pod "pod-secrets-c55af743-bd1d-4c19-9125-df1928dc497a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007600759s Aug 10 00:21:48.229: INFO: Pod "pod-secrets-c55af743-bd1d-4c19-9125-df1928dc497a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012291095s STEP: Saw pod success Aug 10 00:21:48.229: INFO: Pod "pod-secrets-c55af743-bd1d-4c19-9125-df1928dc497a" satisfied condition "Succeeded or Failed" Aug 10 00:21:48.231: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-c55af743-bd1d-4c19-9125-df1928dc497a container secret-env-test: STEP: delete the pod Aug 10 00:21:48.296: INFO: Waiting for pod pod-secrets-c55af743-bd1d-4c19-9125-df1928dc497a to disappear Aug 10 00:21:48.305: INFO: Pod pod-secrets-c55af743-bd1d-4c19-9125-df1928dc497a no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:21:48.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9869" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":303,"completed":144,"skipped":2426,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:21:48.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 10 00:21:48.382: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl 
create and apply) allows request with any unknown properties Aug 10 00:21:51.375: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6484 create -f -' Aug 10 00:21:54.898: INFO: stderr: "" Aug 10 00:21:54.898: INFO: stdout: "e2e-test-crd-publish-openapi-3803-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Aug 10 00:21:54.898: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6484 delete e2e-test-crd-publish-openapi-3803-crds test-cr' Aug 10 00:21:55.016: INFO: stderr: "" Aug 10 00:21:55.016: INFO: stdout: "e2e-test-crd-publish-openapi-3803-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Aug 10 00:21:55.016: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6484 apply -f -' Aug 10 00:21:55.300: INFO: stderr: "" Aug 10 00:21:55.300: INFO: stdout: "e2e-test-crd-publish-openapi-3803-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Aug 10 00:21:55.301: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6484 delete e2e-test-crd-publish-openapi-3803-crds test-cr' Aug 10 00:21:55.457: INFO: stderr: "" Aug 10 00:21:55.457: INFO: stdout: "e2e-test-crd-publish-openapi-3803-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Aug 10 00:21:55.458: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3803-crds' Aug 10 00:21:55.785: INFO: stderr: "" Aug 10 00:21:55.785: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3803-crd\nVERSION: 
crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:21:58.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6484" for this suite. 
• [SLOW TEST:10.438 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":303,"completed":145,"skipped":2442,"failed":0}
SS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 00:21:58.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
Aug 10 00:21:58.831: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 00:22:07.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3746" for this suite.
• [SLOW TEST:8.608 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":303,"completed":146,"skipped":2444,"failed":0}
S
------------------------------
[k8s.io] Lease lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 00:22:07.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 00:22:07.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-5509" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":303,"completed":147,"skipped":2445,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:22:07.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7647.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7647.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7647.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7647.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 10 00:22:13.875: INFO: DNS probes using dns-test-639a5e95-5c21-4e1a-8e4b-3a7b114cd81f succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7647.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7647.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short 
dns-test-service-3.dns-7647.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7647.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 10 00:22:21.993: INFO: File wheezy_udp@dns-test-service-3.dns-7647.svc.cluster.local from pod dns-7647/dns-test-83996032-a9f3-44f9-91df-8747cd5efbfe contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 10 00:22:21.996: INFO: File jessie_udp@dns-test-service-3.dns-7647.svc.cluster.local from pod dns-7647/dns-test-83996032-a9f3-44f9-91df-8747cd5efbfe contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 10 00:22:21.996: INFO: Lookups using dns-7647/dns-test-83996032-a9f3-44f9-91df-8747cd5efbfe failed for: [wheezy_udp@dns-test-service-3.dns-7647.svc.cluster.local jessie_udp@dns-test-service-3.dns-7647.svc.cluster.local] Aug 10 00:22:27.009: INFO: File wheezy_udp@dns-test-service-3.dns-7647.svc.cluster.local from pod dns-7647/dns-test-83996032-a9f3-44f9-91df-8747cd5efbfe contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 10 00:22:27.012: INFO: File jessie_udp@dns-test-service-3.dns-7647.svc.cluster.local from pod dns-7647/dns-test-83996032-a9f3-44f9-91df-8747cd5efbfe contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 10 00:22:27.012: INFO: Lookups using dns-7647/dns-test-83996032-a9f3-44f9-91df-8747cd5efbfe failed for: [wheezy_udp@dns-test-service-3.dns-7647.svc.cluster.local jessie_udp@dns-test-service-3.dns-7647.svc.cluster.local] Aug 10 00:22:32.001: INFO: File wheezy_udp@dns-test-service-3.dns-7647.svc.cluster.local from pod dns-7647/dns-test-83996032-a9f3-44f9-91df-8747cd5efbfe contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 10 00:22:32.005: INFO: File jessie_udp@dns-test-service-3.dns-7647.svc.cluster.local from pod dns-7647/dns-test-83996032-a9f3-44f9-91df-8747cd5efbfe contains 'foo.example.com. 
' instead of 'bar.example.com.' Aug 10 00:22:32.005: INFO: Lookups using dns-7647/dns-test-83996032-a9f3-44f9-91df-8747cd5efbfe failed for: [wheezy_udp@dns-test-service-3.dns-7647.svc.cluster.local jessie_udp@dns-test-service-3.dns-7647.svc.cluster.local] Aug 10 00:22:37.001: INFO: File wheezy_udp@dns-test-service-3.dns-7647.svc.cluster.local from pod dns-7647/dns-test-83996032-a9f3-44f9-91df-8747cd5efbfe contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 10 00:22:37.006: INFO: File jessie_udp@dns-test-service-3.dns-7647.svc.cluster.local from pod dns-7647/dns-test-83996032-a9f3-44f9-91df-8747cd5efbfe contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 10 00:22:37.006: INFO: Lookups using dns-7647/dns-test-83996032-a9f3-44f9-91df-8747cd5efbfe failed for: [wheezy_udp@dns-test-service-3.dns-7647.svc.cluster.local jessie_udp@dns-test-service-3.dns-7647.svc.cluster.local] Aug 10 00:22:42.002: INFO: File wheezy_udp@dns-test-service-3.dns-7647.svc.cluster.local from pod dns-7647/dns-test-83996032-a9f3-44f9-91df-8747cd5efbfe contains 'foo.example.com. ' instead of 'bar.example.com.' 
Aug 10 00:22:42.005: INFO: Lookups using dns-7647/dns-test-83996032-a9f3-44f9-91df-8747cd5efbfe failed for: [wheezy_udp@dns-test-service-3.dns-7647.svc.cluster.local] Aug 10 00:22:47.006: INFO: DNS probes using dns-test-83996032-a9f3-44f9-91df-8747cd5efbfe succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7647.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7647.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7647.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-7647.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 10 00:22:53.697: INFO: DNS probes using dns-test-4115f7d8-2a10-4f35-ad81-91cc0d5e4283 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:22:53.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7647" for this suite. 
• [SLOW TEST:46.334 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":303,"completed":148,"skipped":2466,"failed":0}
SSSS
------------------------------
[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 00:22:53.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6879.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-6879.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F.
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6879.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6879.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-6879.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6879.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 10 00:23:02.371: INFO: DNS probes using dns-6879/dns-test-c7fa5e99-74e5-4f5d-9fd9-88dc85e353ce succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:23:02.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6879" for this suite. 
• [SLOW TEST:9.002 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":303,"completed":149,"skipped":2470,"failed":0}
SSSSS
------------------------------
[sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 00:23:02.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-5a53572b-a6b6-4b84-af16-aaad7a7c9e1a
STEP: Creating a pod to test consume secrets
Aug 10 00:23:03.147: INFO: Waiting up to 5m0s for pod "pod-secrets-29261846-fd82-4756-8d14-7750ac834c54" in namespace "secrets-5140" to be "Succeeded or Failed"
Aug 10 00:23:03.151: INFO: Pod "pod-secrets-29261846-fd82-4756-8d14-7750ac834c54": Phase="Pending", Reason="", readiness=false. Elapsed: 3.748574ms
Aug 10 00:23:05.195: INFO: Pod "pod-secrets-29261846-fd82-4756-8d14-7750ac834c54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047387509s
Aug 10 00:23:07.198: INFO: Pod "pod-secrets-29261846-fd82-4756-8d14-7750ac834c54": Phase="Running", Reason="", readiness=true.
Elapsed: 4.05089338s Aug 10 00:23:09.202: INFO: Pod "pod-secrets-29261846-fd82-4756-8d14-7750ac834c54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.054740206s STEP: Saw pod success Aug 10 00:23:09.202: INFO: Pod "pod-secrets-29261846-fd82-4756-8d14-7750ac834c54" satisfied condition "Succeeded or Failed" Aug 10 00:23:09.205: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-29261846-fd82-4756-8d14-7750ac834c54 container secret-volume-test: STEP: delete the pod Aug 10 00:23:09.231: INFO: Waiting for pod pod-secrets-29261846-fd82-4756-8d14-7750ac834c54 to disappear Aug 10 00:23:09.236: INFO: Pod pod-secrets-29261846-fd82-4756-8d14-7750ac834c54 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:23:09.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5140" for this suite. • [SLOW TEST:6.409 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":303,"completed":150,"skipped":2475,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:23:09.244: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override arguments Aug 10 00:23:09.340: INFO: Waiting up to 5m0s for pod "client-containers-8dfe7932-7529-4341-8283-576d8f0d94d3" in namespace "containers-6692" to be "Succeeded or Failed" Aug 10 00:23:09.362: INFO: Pod "client-containers-8dfe7932-7529-4341-8283-576d8f0d94d3": Phase="Pending", Reason="", readiness=false. Elapsed: 22.478013ms Aug 10 00:23:11.386: INFO: Pod "client-containers-8dfe7932-7529-4341-8283-576d8f0d94d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046684341s Aug 10 00:23:13.390: INFO: Pod "client-containers-8dfe7932-7529-4341-8283-576d8f0d94d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049953292s STEP: Saw pod success Aug 10 00:23:13.390: INFO: Pod "client-containers-8dfe7932-7529-4341-8283-576d8f0d94d3" satisfied condition "Succeeded or Failed" Aug 10 00:23:13.392: INFO: Trying to get logs from node latest-worker2 pod client-containers-8dfe7932-7529-4341-8283-576d8f0d94d3 container test-container: STEP: delete the pod Aug 10 00:23:13.411: INFO: Waiting for pod client-containers-8dfe7932-7529-4341-8283-576d8f0d94d3 to disappear Aug 10 00:23:13.416: INFO: Pod client-containers-8dfe7932-7529-4341-8283-576d8f0d94d3 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:23:13.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6692" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":303,"completed":151,"skipped":2511,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:23:13.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 10 00:23:13.919: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 10 00:23:15.930: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732615793, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732615793, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63732615794, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732615793, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 10 00:23:17.934: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732615793, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732615793, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732615794, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732615793, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 10 00:23:21.219: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Aug 10 00:23:21.242: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:23:21.311: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-995" for this suite. STEP: Destroying namespace "webhook-995-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.023 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":303,"completed":152,"skipped":2526,"failed":0} SS ------------------------------ [sig-node] PodTemplates should delete a collection of pod templates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:23:21.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of pod templates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of pod templates Aug 10 00:23:21.598: INFO: created test-podtemplate-1 Aug 10 00:23:21.614: INFO: created test-podtemplate-2 Aug 10 00:23:21.619: INFO: created test-podtemplate-3 STEP: get a list of pod templates with a label in the current namespace STEP: delete collection of pod templates Aug 10 00:23:21.626: INFO: requesting DeleteCollection of pod 
templates STEP: check that the list of pod templates matches the requested quantity Aug 10 00:23:21.993: INFO: requesting list of pod templates to confirm quantity [AfterEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:23:22.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-7459" for this suite. •{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":303,"completed":153,"skipped":2528,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:23:22.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:23:28.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3786" for this suite. 
• [SLOW TEST:6.287 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":303,"completed":154,"skipped":2550,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 00:23:28.317: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: validating cluster-info
Aug 10 00:23:28.425: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config cluster-info'
Aug 10 00:23:28.524: INFO: stderr: ""
Aug 10 00:23:28.524: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:42901\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at
\x1b[0;33mhttps://172.30.12.66:42901/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:23:28.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9318" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":303,"completed":155,"skipped":2576,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:23:28.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Aug 10 00:23:28.638: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5700 /api/v1/namespaces/watch-5700/configmaps/e2e-watch-test-label-changed 46f15cac-5dc8-4f61-8d50-8b7cc4e2babe 5785278 0 2020-08-10 
00:23:28 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-08-10 00:23:28 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 10 00:23:28.638: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5700 /api/v1/namespaces/watch-5700/configmaps/e2e-watch-test-label-changed 46f15cac-5dc8-4f61-8d50-8b7cc4e2babe 5785279 0 2020-08-10 00:23:28 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-08-10 00:23:28 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 10 00:23:28.639: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5700 /api/v1/namespaces/watch-5700/configmaps/e2e-watch-test-label-changed 46f15cac-5dc8-4f61-8d50-8b7cc4e2babe 5785281 0 2020-08-10 00:23:28 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-08-10 00:23:28 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Aug 10 00:23:38.706: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5700 /api/v1/namespaces/watch-5700/configmaps/e2e-watch-test-label-changed 46f15cac-5dc8-4f61-8d50-8b7cc4e2babe 
5785325 0 2020-08-10 00:23:28 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-08-10 00:23:38 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 10 00:23:38.707: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5700 /api/v1/namespaces/watch-5700/configmaps/e2e-watch-test-label-changed 46f15cac-5dc8-4f61-8d50-8b7cc4e2babe 5785326 0 2020-08-10 00:23:28 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-08-10 00:23:38 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 10 00:23:38.707: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5700 /api/v1/namespaces/watch-5700/configmaps/e2e-watch-test-label-changed 46f15cac-5dc8-4f61-8d50-8b7cc4e2babe 5785327 0 2020-08-10 00:23:28 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-08-10 00:23:38 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:23:38.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5700" for this suite. 
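[Editor's note] The watch events above track a label-selector watch: the test flips the ConfigMap's label away from the selector (expecting no events), then restores it (expecting ADDED). A minimal sketch of the object as it appears in the events — names and values taken from the log, the manifest shape itself is an illustration, not the test's source:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-label-changed
  namespace: watch-5700
  labels:
    # The e2e watch selects on this label; changing its value
    # removes the object from the watch's view.
    watch-this-configmap: label-changed-and-restored
data:
  mutation: "2"   # incremented on each modification step in the log
```

When the label is restored, the watch synthesizes an ADDED event (resourceVersion 5785325 above) even though the object existed the whole time, because the object re-enters the selector's scope.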
• [SLOW TEST:10.168 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":303,"completed":156,"skipped":2602,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:23:38.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0810 00:24:19.265183 8 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 10 00:25:21.286: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. 
Aug 10 00:25:21.286: INFO: Deleting pod "simpletest.rc-bgkxm" in namespace "gc-1190" Aug 10 00:25:21.331: INFO: Deleting pod "simpletest.rc-hxgbg" in namespace "gc-1190" Aug 10 00:25:21.419: INFO: Deleting pod "simpletest.rc-jsswr" in namespace "gc-1190" Aug 10 00:25:21.724: INFO: Deleting pod "simpletest.rc-k477w" in namespace "gc-1190" Aug 10 00:25:21.931: INFO: Deleting pod "simpletest.rc-skdll" in namespace "gc-1190" Aug 10 00:25:22.110: INFO: Deleting pod "simpletest.rc-vd7q2" in namespace "gc-1190" Aug 10 00:25:22.235: INFO: Deleting pod "simpletest.rc-vkwtf" in namespace "gc-1190" Aug 10 00:25:22.732: INFO: Deleting pod "simpletest.rc-vxdn2" in namespace "gc-1190" Aug 10 00:25:23.248: INFO: Deleting pod "simpletest.rc-w8l7z" in namespace "gc-1190" Aug 10 00:25:23.547: INFO: Deleting pod "simpletest.rc-zxbpq" in namespace "gc-1190" [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:25:24.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1190" for this suite. 
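[Editor's note] "orphan pods created by rc if delete options say so" exercises the garbage collector's orphan propagation: the ReplicationController is deleted with an orphaning delete option, and the test then waits 30s to confirm the GC does **not** reap the `simpletest.rc-*` pods (which the framework cleans up manually above). A hedged sketch of the delete request body that produces this behavior, per the standard `meta/v1` DeleteOptions API (an illustration, not the e2e framework's literal call):

```yaml
# Request body for DELETE /api/v1/namespaces/gc-1190/replicationcontrollers/simpletest.rc
apiVersion: v1
kind: DeleteOptions
# "Orphan" clears ownerReferences on dependents instead of cascading
# the delete down to them; the pods survive their controller.
propagationPolicy: Orphan
```

The other two policies are `Background` (default cascade) and `Foreground` (dependents deleted before the owner is removed).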
• [SLOW TEST:105.427 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":303,"completed":157,"skipped":2654,"failed":0} S ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:25:24.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 10 00:25:24.542: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:25:30.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3865" for this suite. 
• [SLOW TEST:6.520 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":303,"completed":158,"skipped":2655,"failed":0} SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:25:30.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-5046 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Aug 10 00:25:30.844: INFO: Found 0 stateful pods, waiting for 3 Aug 10 00:25:40.937: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true 
Aug 10 00:25:40.937: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 10 00:25:40.937: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Aug 10 00:25:50.850: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 10 00:25:50.850: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 10 00:25:50.850: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Aug 10 00:25:50.880: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Aug 10 00:26:00.932: INFO: Updating stateful set ss2 Aug 10 00:26:00.977: INFO: Waiting for Pod statefulset-5046/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Aug 10 00:26:11.506: INFO: Found 2 stateful pods, waiting for 3 Aug 10 00:26:21.560: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 10 00:26:21.560: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 10 00:26:21.560: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Aug 10 00:26:21.754: INFO: Updating stateful set ss2 Aug 10 00:26:21.928: INFO: Waiting for Pod statefulset-5046/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 10 00:26:31.935: INFO: Waiting for Pod statefulset-5046/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Aug 10 00:26:41.955: INFO: Updating stateful set ss2 Aug 10 00:26:41.983: INFO: Waiting for StatefulSet 
statefulset-5046/ss2 to complete update Aug 10 00:26:41.983: INFO: Waiting for Pod statefulset-5046/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Aug 10 00:26:51.992: INFO: Deleting all statefulset in ns statefulset-5046 Aug 10 00:26:51.995: INFO: Scaling statefulset ss2 to 0 Aug 10 00:27:22.037: INFO: Waiting for statefulset status.replicas updated to 0 Aug 10 00:27:22.039: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:27:22.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5046" for this suite. • [SLOW TEST:111.410 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":303,"completed":159,"skipped":2663,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:27:22.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-96f75312-f9bc-47e6-bb2b-962ca0d73156 STEP: Creating a pod to test consume configMaps Aug 10 00:27:22.232: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-00a0de29-c483-470d-a4f0-1b7b17ff3df9" in namespace "projected-8338" to be "Succeeded or Failed" Aug 10 00:27:22.289: INFO: Pod "pod-projected-configmaps-00a0de29-c483-470d-a4f0-1b7b17ff3df9": Phase="Pending", Reason="", readiness=false. Elapsed: 56.617482ms Aug 10 00:27:24.293: INFO: Pod "pod-projected-configmaps-00a0de29-c483-470d-a4f0-1b7b17ff3df9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060305465s Aug 10 00:27:26.297: INFO: Pod "pod-projected-configmaps-00a0de29-c483-470d-a4f0-1b7b17ff3df9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.064240405s STEP: Saw pod success Aug 10 00:27:26.297: INFO: Pod "pod-projected-configmaps-00a0de29-c483-470d-a4f0-1b7b17ff3df9" satisfied condition "Succeeded or Failed" Aug 10 00:27:26.299: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-00a0de29-c483-470d-a4f0-1b7b17ff3df9 container projected-configmap-volume-test: STEP: delete the pod Aug 10 00:27:26.513: INFO: Waiting for pod pod-projected-configmaps-00a0de29-c483-470d-a4f0-1b7b17ff3df9 to disappear Aug 10 00:27:26.559: INFO: Pod pod-projected-configmaps-00a0de29-c483-470d-a4f0-1b7b17ff3df9 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:27:26.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8338" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":160,"skipped":2685,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:27:26.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-0e5ee539-8b7a-4a3d-a504-836c588a6ee9 STEP: Creating the 
pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:27:32.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2242" for this suite. • [SLOW TEST:6.153 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":161,"skipped":2715,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:27:32.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 10 00:27:32.899: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2d9541be-c77e-43ce-aa3f-6980b5dee0ec" in 
namespace "downward-api-7204" to be "Succeeded or Failed" Aug 10 00:27:32.922: INFO: Pod "downwardapi-volume-2d9541be-c77e-43ce-aa3f-6980b5dee0ec": Phase="Pending", Reason="", readiness=false. Elapsed: 22.899201ms Aug 10 00:27:34.990: INFO: Pod "downwardapi-volume-2d9541be-c77e-43ce-aa3f-6980b5dee0ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090462363s Aug 10 00:27:36.998: INFO: Pod "downwardapi-volume-2d9541be-c77e-43ce-aa3f-6980b5dee0ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.09838095s STEP: Saw pod success Aug 10 00:27:36.998: INFO: Pod "downwardapi-volume-2d9541be-c77e-43ce-aa3f-6980b5dee0ec" satisfied condition "Succeeded or Failed" Aug 10 00:27:37.001: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-2d9541be-c77e-43ce-aa3f-6980b5dee0ec container client-container: STEP: delete the pod Aug 10 00:27:37.046: INFO: Waiting for pod downwardapi-volume-2d9541be-c77e-43ce-aa3f-6980b5dee0ec to disappear Aug 10 00:27:37.085: INFO: Pod downwardapi-volume-2d9541be-c77e-43ce-aa3f-6980b5dee0ec no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:27:37.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7204" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":162,"skipped":2723,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:27:37.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-2139.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-2139.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-2139.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-2139.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2139.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-2139.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-2139.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-2139.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-2139.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2139.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 10 00:27:45.285: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:27:45.289: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:27:45.291: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:27:45.295: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:27:45.305: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:27:45.308: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local from 
pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:27:45.311: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:27:45.314: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:27:45.319: INFO: Lookups using dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2139.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2139.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local jessie_udp@dns-test-service-2.dns-2139.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2139.svc.cluster.local] Aug 10 00:27:50.324: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:27:50.327: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:27:50.330: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2139.svc.cluster.local from 
pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:27:50.333: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:27:50.342: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:27:50.344: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:27:50.346: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:27:50.348: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:27:50.354: INFO: Lookups using dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2139.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2139.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local jessie_udp@dns-test-service-2.dns-2139.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2139.svc.cluster.local] Aug 10 00:27:55.323: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:27:55.326: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:27:55.329: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:27:55.331: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:27:55.340: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:27:55.342: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:27:55.345: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2139.svc.cluster.local from pod 
dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:27:55.347: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:27:55.351: INFO: Lookups using dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2139.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2139.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local jessie_udp@dns-test-service-2.dns-2139.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2139.svc.cluster.local] Aug 10 00:28:00.328: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:28:00.332: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:28:00.335: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:28:00.338: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2139.svc.cluster.local from pod 
dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:28:00.344: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:28:00.346: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:28:00.348: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:28:00.350: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:28:00.355: INFO: Lookups using dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2139.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2139.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local jessie_udp@dns-test-service-2.dns-2139.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2139.svc.cluster.local] Aug 10 00:28:05.324: INFO: Unable to read 
wheezy_udp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:28:05.327: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:28:05.331: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:28:05.334: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:28:05.346: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:28:05.350: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:28:05.352: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:28:05.355: INFO: Unable to read 
jessie_tcp@dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:28:05.360: INFO: Lookups using dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2139.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2139.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local jessie_udp@dns-test-service-2.dns-2139.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2139.svc.cluster.local] Aug 10 00:28:10.324: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:28:10.328: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:28:10.332: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:28:10.336: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:28:10.350: INFO: Unable to read 
jessie_udp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:28:10.352: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:28:10.354: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:28:10.356: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2139.svc.cluster.local from pod dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942: the server could not find the requested resource (get pods dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942) Aug 10 00:28:10.361: INFO: Lookups using dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2139.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2139.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2139.svc.cluster.local jessie_udp@dns-test-service-2.dns-2139.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2139.svc.cluster.local] Aug 10 00:28:15.359: INFO: DNS probes using dns-2139/dns-test-ab19812d-6f5c-48f4-813c-e29971bb4942 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 
00:28:15.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2139" for this suite. • [SLOW TEST:38.737 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":303,"completed":163,"skipped":2737,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:28:15.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Aug 10 00:28:16.041: INFO: Waiting up to 5m0s for pod "pod-0a013c2c-387e-49de-bba2-b6220c961c08" in namespace "emptydir-1729" to be "Succeeded or Failed" Aug 10 00:28:16.080: INFO: Pod "pod-0a013c2c-387e-49de-bba2-b6220c961c08": Phase="Pending", Reason="", readiness=false. Elapsed: 38.814006ms Aug 10 00:28:18.102: INFO: Pod "pod-0a013c2c-387e-49de-bba2-b6220c961c08": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.060880021s Aug 10 00:28:20.106: INFO: Pod "pod-0a013c2c-387e-49de-bba2-b6220c961c08": Phase="Running", Reason="", readiness=true. Elapsed: 4.065074606s Aug 10 00:28:22.110: INFO: Pod "pod-0a013c2c-387e-49de-bba2-b6220c961c08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.069478555s STEP: Saw pod success Aug 10 00:28:22.110: INFO: Pod "pod-0a013c2c-387e-49de-bba2-b6220c961c08" satisfied condition "Succeeded or Failed" Aug 10 00:28:22.114: INFO: Trying to get logs from node latest-worker2 pod pod-0a013c2c-387e-49de-bba2-b6220c961c08 container test-container: STEP: delete the pod Aug 10 00:28:22.183: INFO: Waiting for pod pod-0a013c2c-387e-49de-bba2-b6220c961c08 to disappear Aug 10 00:28:22.192: INFO: Pod pod-0a013c2c-387e-49de-bba2-b6220c961c08 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:28:22.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1729" for this suite. 
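Editor's note: the "Waiting up to 5m0s for pod ... to be Succeeded or Failed" lines above come from the framework's pod-condition poll loop, which re-reads the pod phase every couple of seconds and logs the elapsed time until a terminal phase is seen or the timeout expires. A minimal sketch of that loop, with a stubbed `get_phase` standing in for the real API call (all names here are illustrative, not the e2e framework's own):

```python
import time

def wait_for_pod_condition(get_phase, timeout=300.0, poll=2.0,
                           clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a terminal phase or the timeout elapses.

    Mirrors the "Succeeded or Failed" wait seen in the log: each poll
    reports the current phase and the elapsed time since the wait began.
    """
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        print(f'Pod phase={phase!r} elapsed={elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase!r} after {timeout}s")
        sleep(poll)

# Example: a pod that is Pending twice, Running once, then Succeeded,
# matching the phase sequence logged above.
phases = iter(["Pending", "Pending", "Running", "Succeeded"])
result = wait_for_pod_condition(lambda: next(phases),
                                poll=0.0, sleep=lambda s: None)
```

The injectable `clock`/`sleep` parameters are a testing convenience; the real framework uses `wait.PollImmediate` in Go.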
• [SLOW TEST:6.368 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":164,"skipped":2769,"failed":0} [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:28:22.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 10 00:28:22.261: INFO: Creating deployment "test-recreate-deployment" Aug 10 00:28:22.271: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Aug 10 00:28:22.314: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Aug 10 00:28:24.323: INFO: Waiting deployment "test-recreate-deployment" to complete Aug 10 00:28:24.326: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732616102, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732616102, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732616102, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732616102, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-c96cf48f\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 10 00:28:26.331: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Aug 10 00:28:26.339: INFO: Updating deployment test-recreate-deployment Aug 10 00:28:26.339: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Aug 10 00:28:26.965: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-3284 /apis/apps/v1/namespaces/deployment-3284/deployments/test-recreate-deployment 0f763979-d8e3-4d94-bceb-723c1bb7dc7c 5786928 2 2020-08-10 00:28:22 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-08-10 00:28:26 +0000 UTC FieldsV1
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-08-10 00:28:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005581af8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-10 00:28:26 +0000 UTC,LastTransitionTime:2020-08-10 00:28:26 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-f79dd4667" is progressing.,LastUpdateTime:2020-08-10 00:28:26 +0000 UTC,LastTransitionTime:2020-08-10 00:28:22 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Aug 10 00:28:27.124: INFO: New ReplicaSet "test-recreate-deployment-f79dd4667" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-f79dd4667 deployment-3284 /apis/apps/v1/namespaces/deployment-3284/replicasets/test-recreate-deployment-f79dd4667 91e154a7-5142-4113-80c6-0437b39bc7c9 5786926 1 2020-08-10 00:28:26 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 0f763979-d8e3-4d94-bceb-723c1bb7dc7c 0xc0043fc000 0xc0043fc001}] [] [{kube-controller-manager Update apps/v1 2020-08-10 00:28:26 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f763979-d8e3-4d94-bceb-723c1bb7dc7c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: f79dd4667,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0043fc078 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil 
default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 10 00:28:27.124: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Aug 10 00:28:27.124: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-c96cf48f deployment-3284 /apis/apps/v1/namespaces/deployment-3284/replicasets/test-recreate-deployment-c96cf48f aaf74cf5-72c8-4975-8176-dfd3ef4966bb 5786917 2 2020-08-10 00:28:22 +0000 UTC map[name:sample-pod-3 pod-template-hash:c96cf48f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 0f763979-d8e3-4d94-bceb-723c1bb7dc7c 0xc005581f0f 0xc005581f20}] [] [{kube-controller-manager Update apps/v1 2020-08-10 00:28:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f763979-d8e3-4d94-bceb-723c1bb7dc7c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSele
ctor{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: c96cf48f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:c96cf48f] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005581f98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 10 00:28:27.127: INFO: Pod "test-recreate-deployment-f79dd4667-x2ghf" is not available: &Pod{ObjectMeta:{test-recreate-deployment-f79dd4667-x2ghf test-recreate-deployment-f79dd4667- deployment-3284 /api/v1/namespaces/deployment-3284/pods/test-recreate-deployment-f79dd4667-x2ghf 853cf971-37fa-4611-84dd-770d1b432fbf 5786929 0 2020-08-10 00:28:26 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [{apps/v1 ReplicaSet test-recreate-deployment-f79dd4667 91e154a7-5142-4113-80c6-0437b39bc7c9 0xc0050fc700 0xc0050fc701}] [] [{kube-controller-manager Update v1 2020-08-10 00:28:26 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"91e154a7-5142-4113-80c6-0437b39bc7c9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-10 00:28:26 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vldbt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vldbt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{
},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vldbt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus
{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-10 00:28:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-10 00:28:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-10 00:28:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-10 00:28:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-08-10 00:28:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:28:27.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3284" for this suite. 
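Editor's note: the Deployment dump above shows `Strategy:DeploymentStrategy{Type:Recreate,...}`, which is what this test exercises: the controller scales the old ReplicaSet (`test-recreate-deployment-c96cf48f`) down to zero before the new ReplicaSet (`test-recreate-deployment-f79dd4667`) creates any pods, so old and new pods never run concurrently. A toy simulation of that ordering (not the controller's actual code):

```python
def recreate_rollout(old_replicas, new_replicas):
    """Yield (old, new) replica counts over a Recreate rollout.

    Old pods are deleted first; new pods are only created once the
    old ReplicaSet has reached zero replicas.
    """
    old, new = old_replicas, 0
    yield old, new
    while old > 0:             # scale the old ReplicaSet down to 0 first
        old -= 1
        yield old, new
    while new < new_replicas:  # only then bring up the new ReplicaSet
        new += 1
        yield old, new

states = list(recreate_rollout(1, 1))
# At no point do old and new pods overlap:
assert all(old == 0 or new == 0 for old, new in states)
```

This is the property the test's watch verifies ("new pods will not run with old pods"); contrast with RollingUpdate, where both ReplicaSets hold non-zero replicas during the transition.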
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":303,"completed":165,"skipped":2769,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:28:27.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on tmpfs Aug 10 00:28:27.512: INFO: Waiting up to 5m0s for pod "pod-9745cce9-a7e1-4c61-aea5-3786dddfe2ce" in namespace "emptydir-7985" to be "Succeeded or Failed" Aug 10 00:28:27.515: INFO: Pod "pod-9745cce9-a7e1-4c61-aea5-3786dddfe2ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.808162ms Aug 10 00:28:29.613: INFO: Pod "pod-9745cce9-a7e1-4c61-aea5-3786dddfe2ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101413765s Aug 10 00:28:31.625: INFO: Pod "pod-9745cce9-a7e1-4c61-aea5-3786dddfe2ce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113528036s Aug 10 00:28:33.629: INFO: Pod "pod-9745cce9-a7e1-4c61-aea5-3786dddfe2ce": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.117612686s STEP: Saw pod success Aug 10 00:28:33.629: INFO: Pod "pod-9745cce9-a7e1-4c61-aea5-3786dddfe2ce" satisfied condition "Succeeded or Failed" Aug 10 00:28:33.633: INFO: Trying to get logs from node latest-worker2 pod pod-9745cce9-a7e1-4c61-aea5-3786dddfe2ce container test-container: STEP: delete the pod Aug 10 00:28:33.735: INFO: Waiting for pod pod-9745cce9-a7e1-4c61-aea5-3786dddfe2ce to disappear Aug 10 00:28:33.750: INFO: Pod pod-9745cce9-a7e1-4c61-aea5-3786dddfe2ce no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:28:33.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7985" for this suite. • [SLOW TEST:6.623 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":166,"skipped":2773,"failed":0} [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:28:33.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 10 00:28:33.819: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ac04c59d-681a-475b-9e7e-e3dbfd2c5c76" in namespace "downward-api-9917" to be "Succeeded or Failed" Aug 10 00:28:33.944: INFO: Pod "downwardapi-volume-ac04c59d-681a-475b-9e7e-e3dbfd2c5c76": Phase="Pending", Reason="", readiness=false. Elapsed: 124.818663ms Aug 10 00:28:35.947: INFO: Pod "downwardapi-volume-ac04c59d-681a-475b-9e7e-e3dbfd2c5c76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128697966s Aug 10 00:28:37.951: INFO: Pod "downwardapi-volume-ac04c59d-681a-475b-9e7e-e3dbfd2c5c76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.132278548s STEP: Saw pod success Aug 10 00:28:37.951: INFO: Pod "downwardapi-volume-ac04c59d-681a-475b-9e7e-e3dbfd2c5c76" satisfied condition "Succeeded or Failed" Aug 10 00:28:37.954: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-ac04c59d-681a-475b-9e7e-e3dbfd2c5c76 container client-container: STEP: delete the pod Aug 10 00:28:38.016: INFO: Waiting for pod downwardapi-volume-ac04c59d-681a-475b-9e7e-e3dbfd2c5c76 to disappear Aug 10 00:28:38.018: INFO: Pod downwardapi-volume-ac04c59d-681a-475b-9e7e-e3dbfd2c5c76 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:28:38.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9917" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":303,"completed":167,"skipped":2773,"failed":0} ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:28:38.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 10 00:28:38.167: INFO: Waiting up to 5m0s for pod "downwardapi-volume-756cfe90-3747-4b6a-95f2-bbd0c1dfd77a" in namespace "projected-6268" to be "Succeeded or Failed" Aug 10 00:28:38.174: INFO: Pod "downwardapi-volume-756cfe90-3747-4b6a-95f2-bbd0c1dfd77a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.485783ms Aug 10 00:28:40.249: INFO: Pod "downwardapi-volume-756cfe90-3747-4b6a-95f2-bbd0c1dfd77a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081842854s Aug 10 00:28:42.253: INFO: Pod "downwardapi-volume-756cfe90-3747-4b6a-95f2-bbd0c1dfd77a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.086090679s STEP: Saw pod success Aug 10 00:28:42.253: INFO: Pod "downwardapi-volume-756cfe90-3747-4b6a-95f2-bbd0c1dfd77a" satisfied condition "Succeeded or Failed" Aug 10 00:28:42.256: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-756cfe90-3747-4b6a-95f2-bbd0c1dfd77a container client-container: STEP: delete the pod Aug 10 00:28:42.295: INFO: Waiting for pod downwardapi-volume-756cfe90-3747-4b6a-95f2-bbd0c1dfd77a to disappear Aug 10 00:28:42.299: INFO: Pod downwardapi-volume-756cfe90-3747-4b6a-95f2-bbd0c1dfd77a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:28:42.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6268" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":303,"completed":168,"skipped":2773,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:28:42.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 10 00:28:42.477: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"e80e260c-4126-431a-9a90-4cfeca7c1d4b", 
Controller:(*bool)(0xc005c34192), BlockOwnerDeletion:(*bool)(0xc005c34193)}} Aug 10 00:28:42.529: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"f136c190-808d-4511-9339-ded066df208c", Controller:(*bool)(0xc0043fcf5a), BlockOwnerDeletion:(*bool)(0xc0043fcf5b)}} Aug 10 00:28:42.535: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"5f264a88-f549-45e5-ab42-a02d36ba38f1", Controller:(*bool)(0xc005c3437a), BlockOwnerDeletion:(*bool)(0xc005c3437b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:28:47.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3845" for this suite. • [SLOW TEST:5.253 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":303,"completed":169,"skipped":2778,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:28:47.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in 
namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 10 00:28:47.651: INFO: Waiting up to 5m0s for pod "downwardapi-volume-82a58dbf-dfae-4758-a6e0-e8dc36459981" in namespace "downward-api-1762" to be "Succeeded or Failed" Aug 10 00:28:47.673: INFO: Pod "downwardapi-volume-82a58dbf-dfae-4758-a6e0-e8dc36459981": Phase="Pending", Reason="", readiness=false. Elapsed: 22.438236ms Aug 10 00:28:49.678: INFO: Pod "downwardapi-volume-82a58dbf-dfae-4758-a6e0-e8dc36459981": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026706214s Aug 10 00:28:51.682: INFO: Pod "downwardapi-volume-82a58dbf-dfae-4758-a6e0-e8dc36459981": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030853248s STEP: Saw pod success Aug 10 00:28:51.682: INFO: Pod "downwardapi-volume-82a58dbf-dfae-4758-a6e0-e8dc36459981" satisfied condition "Succeeded or Failed" Aug 10 00:28:51.685: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-82a58dbf-dfae-4758-a6e0-e8dc36459981 container client-container: STEP: delete the pod Aug 10 00:28:51.709: INFO: Waiting for pod downwardapi-volume-82a58dbf-dfae-4758-a6e0-e8dc36459981 to disappear Aug 10 00:28:51.713: INFO: Pod downwardapi-volume-82a58dbf-dfae-4758-a6e0-e8dc36459981 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:28:51.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1762" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":303,"completed":170,"skipped":2788,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:28:51.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Aug 10 00:28:56.346: INFO: Successfully updated pod "pod-update-a3ffe957-6980-44a3-9400-67b5e6a32525" STEP: verifying the updated pod is in kubernetes Aug 10 00:28:56.356: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:28:56.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1303" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":303,"completed":171,"skipped":2812,"failed":0} SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:28:56.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-2c0dbfe2-8241-43ba-8929-6f100f82542d STEP: Creating a pod to test consume configMaps Aug 10 00:28:56.518: INFO: Waiting up to 5m0s for pod "pod-configmaps-990f9f53-6a77-42d7-bd1b-a935cda66236" in namespace "configmap-6231" to be "Succeeded or Failed" Aug 10 00:28:56.523: INFO: Pod "pod-configmaps-990f9f53-6a77-42d7-bd1b-a935cda66236": Phase="Pending", Reason="", readiness=false. Elapsed: 5.750454ms Aug 10 00:28:58.527: INFO: Pod "pod-configmaps-990f9f53-6a77-42d7-bd1b-a935cda66236": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009572357s Aug 10 00:29:00.532: INFO: Pod "pod-configmaps-990f9f53-6a77-42d7-bd1b-a935cda66236": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.014080911s STEP: Saw pod success Aug 10 00:29:00.532: INFO: Pod "pod-configmaps-990f9f53-6a77-42d7-bd1b-a935cda66236" satisfied condition "Succeeded or Failed" Aug 10 00:29:00.535: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-990f9f53-6a77-42d7-bd1b-a935cda66236 container configmap-volume-test: STEP: delete the pod Aug 10 00:29:00.569: INFO: Waiting for pod pod-configmaps-990f9f53-6a77-42d7-bd1b-a935cda66236 to disappear Aug 10 00:29:00.584: INFO: Pod pod-configmaps-990f9f53-6a77-42d7-bd1b-a935cda66236 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:29:00.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6231" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":172,"skipped":2818,"failed":0} ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:29:00.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is 
in kubernetes STEP: updating the pod Aug 10 00:29:05.453: INFO: Successfully updated pod "pod-update-activedeadlineseconds-55cebc40-47ec-4dc1-88c1-7ecaaebf29ef" Aug 10 00:29:05.453: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-55cebc40-47ec-4dc1-88c1-7ecaaebf29ef" in namespace "pods-1255" to be "terminated due to deadline exceeded" Aug 10 00:29:05.468: INFO: Pod "pod-update-activedeadlineseconds-55cebc40-47ec-4dc1-88c1-7ecaaebf29ef": Phase="Running", Reason="", readiness=true. Elapsed: 15.038619ms Aug 10 00:29:07.473: INFO: Pod "pod-update-activedeadlineseconds-55cebc40-47ec-4dc1-88c1-7ecaaebf29ef": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.019197518s Aug 10 00:29:07.473: INFO: Pod "pod-update-activedeadlineseconds-55cebc40-47ec-4dc1-88c1-7ecaaebf29ef" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:29:07.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1255" for this suite. 
• [SLOW TEST:6.890 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":303,"completed":173,"skipped":2818,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:29:07.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating Pod STEP: Waiting for the pod running STEP: Getting the pod STEP: Reading file content from the nginx-container Aug 10 00:29:13.594: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-8928 PodName:pod-sharedvolume-3db64b4c-a24d-4d34-90f9-ee2ff583c7a5 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 10 00:29:13.594: INFO: >>> kubeConfig: /root/.kube/config I0810 00:29:13.629844 8 log.go:181] (0xc000043ad0) (0xc002e7f860) Create stream I0810 00:29:13.629872 8 log.go:181] (0xc000043ad0) (0xc002e7f860) Stream added, broadcasting: 1 I0810 00:29:13.632569 8 log.go:181]
(0xc000043ad0) Reply frame received for 1 I0810 00:29:13.632598 8 log.go:181] (0xc000043ad0) (0xc0034681e0) Create stream I0810 00:29:13.632607 8 log.go:181] (0xc000043ad0) (0xc0034681e0) Stream added, broadcasting: 3 I0810 00:29:13.633472 8 log.go:181] (0xc000043ad0) Reply frame received for 3 I0810 00:29:13.633491 8 log.go:181] (0xc000043ad0) (0xc002e7f900) Create stream I0810 00:29:13.633502 8 log.go:181] (0xc000043ad0) (0xc002e7f900) Stream added, broadcasting: 5 I0810 00:29:13.634386 8 log.go:181] (0xc000043ad0) Reply frame received for 5 I0810 00:29:13.708621 8 log.go:181] (0xc000043ad0) Data frame received for 5 I0810 00:29:13.708672 8 log.go:181] (0xc002e7f900) (5) Data frame handling I0810 00:29:13.708711 8 log.go:181] (0xc000043ad0) Data frame received for 3 I0810 00:29:13.708849 8 log.go:181] (0xc0034681e0) (3) Data frame handling I0810 00:29:13.708884 8 log.go:181] (0xc0034681e0) (3) Data frame sent I0810 00:29:13.708918 8 log.go:181] (0xc000043ad0) Data frame received for 3 I0810 00:29:13.708939 8 log.go:181] (0xc0034681e0) (3) Data frame handling I0810 00:29:13.711907 8 log.go:181] (0xc000043ad0) Data frame received for 1 I0810 00:29:13.711936 8 log.go:181] (0xc002e7f860) (1) Data frame handling I0810 00:29:13.711951 8 log.go:181] (0xc002e7f860) (1) Data frame sent I0810 00:29:13.711965 8 log.go:181] (0xc000043ad0) (0xc002e7f860) Stream removed, broadcasting: 1 I0810 00:29:13.712092 8 log.go:181] (0xc000043ad0) (0xc002e7f860) Stream removed, broadcasting: 1 I0810 00:29:13.712114 8 log.go:181] (0xc000043ad0) (0xc0034681e0) Stream removed, broadcasting: 3 I0810 00:29:13.712350 8 log.go:181] (0xc000043ad0) (0xc002e7f900) Stream removed, broadcasting: 5 Aug 10 00:29:13.712: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:29:13.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0810 00:29:13.713007 8 log.go:181] 
(0xc000043ad0) Go away received STEP: Destroying namespace "emptydir-8928" for this suite. • [SLOW TEST:6.240 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":303,"completed":174,"skipped":2823,"failed":0} SSSSSSS ------------------------------ [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-instrumentation] Events API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:29:13.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing events in all namespaces STEP: listing events in test namespace STEP: listing events with field selection filtering on source STEP: listing events with field selection filtering on reportingController STEP: getting the test event STEP: patching the test event STEP: getting the test event STEP: updating the test event STEP: getting the test event STEP: deleting the test event STEP: 
listing events in all namespaces STEP: listing events in test namespace [AfterEach] [sig-instrumentation] Events API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:29:13.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-5454" for this suite. •{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":303,"completed":175,"skipped":2830,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:29:13.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Aug 10 00:29:18.559: INFO: Successfully updated pod "labelsupdate71bd4a94-edc4-46e8-a37d-9e443616244c" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:29:20.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6479" for this suite. 
• [SLOW TEST:6.712 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":303,"completed":176,"skipped":2858,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:29:20.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 10 00:29:20.697: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cd51712d-e1e5-4b4b-a89c-443e6cc66937" in namespace "downward-api-7940" to be "Succeeded or Failed" Aug 10 00:29:20.702: INFO: Pod "downwardapi-volume-cd51712d-e1e5-4b4b-a89c-443e6cc66937": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.723551ms Aug 10 00:29:22.706: INFO: Pod "downwardapi-volume-cd51712d-e1e5-4b4b-a89c-443e6cc66937": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0091147s Aug 10 00:29:24.711: INFO: Pod "downwardapi-volume-cd51712d-e1e5-4b4b-a89c-443e6cc66937": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013905451s Aug 10 00:29:26.715: INFO: Pod "downwardapi-volume-cd51712d-e1e5-4b4b-a89c-443e6cc66937": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017967497s STEP: Saw pod success Aug 10 00:29:26.715: INFO: Pod "downwardapi-volume-cd51712d-e1e5-4b4b-a89c-443e6cc66937" satisfied condition "Succeeded or Failed" Aug 10 00:29:26.718: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-cd51712d-e1e5-4b4b-a89c-443e6cc66937 container client-container: STEP: delete the pod Aug 10 00:29:26.740: INFO: Waiting for pod downwardapi-volume-cd51712d-e1e5-4b4b-a89c-443e6cc66937 to disappear Aug 10 00:29:26.744: INFO: Pod downwardapi-volume-cd51712d-e1e5-4b4b-a89c-443e6cc66937 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:29:26.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7940" for this suite. 
• [SLOW TEST:6.112 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":177,"skipped":2877,"failed":0} SSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:29:26.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-7037 Aug 10 00:29:30.890: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-7037 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Aug 10 00:29:31.134: INFO: stderr: "I0810 00:29:31.021414 1184 log.go:181] (0xc0009defd0) 
(0xc000bd5ae0) Create stream\nI0810 00:29:31.021505 1184 log.go:181] (0xc0009defd0) (0xc000bd5ae0) Stream added, broadcasting: 1\nI0810 00:29:31.025067 1184 log.go:181] (0xc0009defd0) Reply frame received for 1\nI0810 00:29:31.025133 1184 log.go:181] (0xc0009defd0) (0xc0004cfd60) Create stream\nI0810 00:29:31.025160 1184 log.go:181] (0xc0009defd0) (0xc0004cfd60) Stream added, broadcasting: 3\nI0810 00:29:31.026936 1184 log.go:181] (0xc0009defd0) Reply frame received for 3\nI0810 00:29:31.026997 1184 log.go:181] (0xc0009defd0) (0xc0003c2780) Create stream\nI0810 00:29:31.027020 1184 log.go:181] (0xc0009defd0) (0xc0003c2780) Stream added, broadcasting: 5\nI0810 00:29:31.027880 1184 log.go:181] (0xc0009defd0) Reply frame received for 5\nI0810 00:29:31.119674 1184 log.go:181] (0xc0009defd0) Data frame received for 5\nI0810 00:29:31.119697 1184 log.go:181] (0xc0003c2780) (5) Data frame handling\nI0810 00:29:31.119706 1184 log.go:181] (0xc0003c2780) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0810 00:29:31.124586 1184 log.go:181] (0xc0009defd0) Data frame received for 3\nI0810 00:29:31.124614 1184 log.go:181] (0xc0004cfd60) (3) Data frame handling\nI0810 00:29:31.124644 1184 log.go:181] (0xc0004cfd60) (3) Data frame sent\nI0810 00:29:31.125423 1184 log.go:181] (0xc0009defd0) Data frame received for 5\nI0810 00:29:31.125451 1184 log.go:181] (0xc0003c2780) (5) Data frame handling\nI0810 00:29:31.125514 1184 log.go:181] (0xc0009defd0) Data frame received for 3\nI0810 00:29:31.125538 1184 log.go:181] (0xc0004cfd60) (3) Data frame handling\nI0810 00:29:31.127553 1184 log.go:181] (0xc0009defd0) Data frame received for 1\nI0810 00:29:31.127582 1184 log.go:181] (0xc000bd5ae0) (1) Data frame handling\nI0810 00:29:31.127611 1184 log.go:181] (0xc000bd5ae0) (1) Data frame sent\nI0810 00:29:31.127631 1184 log.go:181] (0xc0009defd0) (0xc000bd5ae0) Stream removed, broadcasting: 1\nI0810 00:29:31.127652 1184 log.go:181] (0xc0009defd0) Go away 
received\nI0810 00:29:31.128067 1184 log.go:181] (0xc0009defd0) (0xc000bd5ae0) Stream removed, broadcasting: 1\nI0810 00:29:31.128090 1184 log.go:181] (0xc0009defd0) (0xc0004cfd60) Stream removed, broadcasting: 3\nI0810 00:29:31.128102 1184 log.go:181] (0xc0009defd0) (0xc0003c2780) Stream removed, broadcasting: 5\n" Aug 10 00:29:31.134: INFO: stdout: "iptables" Aug 10 00:29:31.134: INFO: proxyMode: iptables Aug 10 00:29:31.139: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 10 00:29:31.158: INFO: Pod kube-proxy-mode-detector still exists Aug 10 00:29:33.158: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 10 00:29:33.163: INFO: Pod kube-proxy-mode-detector still exists Aug 10 00:29:35.158: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 10 00:29:35.163: INFO: Pod kube-proxy-mode-detector still exists Aug 10 00:29:37.158: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 10 00:29:37.163: INFO: Pod kube-proxy-mode-detector still exists Aug 10 00:29:39.158: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 10 00:29:39.163: INFO: Pod kube-proxy-mode-detector still exists Aug 10 00:29:41.158: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 10 00:29:41.163: INFO: Pod kube-proxy-mode-detector still exists Aug 10 00:29:43.158: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 10 00:29:43.163: INFO: Pod kube-proxy-mode-detector still exists Aug 10 00:29:45.158: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 10 00:29:45.163: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-7037 STEP: creating replication controller affinity-clusterip-timeout in namespace services-7037 I0810 00:29:45.238131 8 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-7037, replica count: 3 I0810 00:29:48.288567 8 runners.go:190] 
affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0810 00:29:51.288885 8 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 10 00:29:51.293: INFO: Creating new exec pod Aug 10 00:29:56.313: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-7037 execpod-affinityvsdws -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Aug 10 00:29:56.550: INFO: stderr: "I0810 00:29:56.438077 1202 log.go:181] (0xc00003ab00) (0xc0008754a0) Create stream\nI0810 00:29:56.438145 1202 log.go:181] (0xc00003ab00) (0xc0008754a0) Stream added, broadcasting: 1\nI0810 00:29:56.439977 1202 log.go:181] (0xc00003ab00) Reply frame received for 1\nI0810 00:29:56.440021 1202 log.go:181] (0xc00003ab00) (0xc00035e3c0) Create stream\nI0810 00:29:56.440043 1202 log.go:181] (0xc00003ab00) (0xc00035e3c0) Stream added, broadcasting: 3\nI0810 00:29:56.441094 1202 log.go:181] (0xc00003ab00) Reply frame received for 3\nI0810 00:29:56.441132 1202 log.go:181] (0xc00003ab00) (0xc000a128c0) Create stream\nI0810 00:29:56.441153 1202 log.go:181] (0xc00003ab00) (0xc000a128c0) Stream added, broadcasting: 5\nI0810 00:29:56.442051 1202 log.go:181] (0xc00003ab00) Reply frame received for 5\nI0810 00:29:56.543077 1202 log.go:181] (0xc00003ab00) Data frame received for 5\nI0810 00:29:56.543118 1202 log.go:181] (0xc000a128c0) (5) Data frame handling\nI0810 00:29:56.543142 1202 log.go:181] (0xc000a128c0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI0810 00:29:56.543180 1202 log.go:181] (0xc00003ab00) Data frame received for 5\nI0810 00:29:56.543217 1202 log.go:181] (0xc000a128c0) (5) Data frame handling\nI0810 00:29:56.543246 1202 
log.go:181] (0xc00003ab00) Data frame received for 3\nI0810 00:29:56.543261 1202 log.go:181] (0xc00035e3c0) (3) Data frame handling\nI0810 00:29:56.545192 1202 log.go:181] (0xc00003ab00) Data frame received for 1\nI0810 00:29:56.545248 1202 log.go:181] (0xc0008754a0) (1) Data frame handling\nI0810 00:29:56.545275 1202 log.go:181] (0xc0008754a0) (1) Data frame sent\nI0810 00:29:56.545331 1202 log.go:181] (0xc00003ab00) (0xc0008754a0) Stream removed, broadcasting: 1\nI0810 00:29:56.545373 1202 log.go:181] (0xc00003ab00) Go away received\nI0810 00:29:56.545884 1202 log.go:181] (0xc00003ab00) (0xc0008754a0) Stream removed, broadcasting: 1\nI0810 00:29:56.545911 1202 log.go:181] (0xc00003ab00) (0xc00035e3c0) Stream removed, broadcasting: 3\nI0810 00:29:56.545922 1202 log.go:181] (0xc00003ab00) (0xc000a128c0) Stream removed, broadcasting: 5\n" Aug 10 00:29:56.551: INFO: stdout: "" Aug 10 00:29:56.551: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-7037 execpod-affinityvsdws -- /bin/sh -x -c nc -zv -t -w 2 10.107.222.148 80' Aug 10 00:29:56.764: INFO: stderr: "I0810 00:29:56.689280 1220 log.go:181] (0xc00003b970) (0xc0007a9540) Create stream\nI0810 00:29:56.689381 1220 log.go:181] (0xc00003b970) (0xc0007a9540) Stream added, broadcasting: 1\nI0810 00:29:56.695013 1220 log.go:181] (0xc00003b970) Reply frame received for 1\nI0810 00:29:56.695057 1220 log.go:181] (0xc00003b970) (0xc0007a95e0) Create stream\nI0810 00:29:56.695069 1220 log.go:181] (0xc00003b970) (0xc0007a95e0) Stream added, broadcasting: 3\nI0810 00:29:56.695981 1220 log.go:181] (0xc00003b970) Reply frame received for 3\nI0810 00:29:56.696031 1220 log.go:181] (0xc00003b970) (0xc00055eb40) Create stream\nI0810 00:29:56.696056 1220 log.go:181] (0xc00003b970) (0xc00055eb40) Stream added, broadcasting: 5\nI0810 00:29:56.696823 1220 log.go:181] (0xc00003b970) Reply frame received for 5\nI0810 00:29:56.756209 1220 log.go:181] 
(0xc00003b970) Data frame received for 3\nI0810 00:29:56.756234 1220 log.go:181] (0xc0007a95e0) (3) Data frame handling\nI0810 00:29:56.756261 1220 log.go:181] (0xc00003b970) Data frame received for 5\nI0810 00:29:56.756294 1220 log.go:181] (0xc00055eb40) (5) Data frame handling\nI0810 00:29:56.756329 1220 log.go:181] (0xc00055eb40) (5) Data frame sent\nI0810 00:29:56.756354 1220 log.go:181] (0xc00003b970) Data frame received for 5\nI0810 00:29:56.756373 1220 log.go:181] (0xc00055eb40) (5) Data frame handling\n+ nc -zv -t -w 2 10.107.222.148 80\nConnection to 10.107.222.148 80 port [tcp/http] succeeded!\nI0810 00:29:56.758141 1220 log.go:181] (0xc00003b970) Data frame received for 1\nI0810 00:29:56.758182 1220 log.go:181] (0xc0007a9540) (1) Data frame handling\nI0810 00:29:56.758195 1220 log.go:181] (0xc0007a9540) (1) Data frame sent\nI0810 00:29:56.758209 1220 log.go:181] (0xc00003b970) (0xc0007a9540) Stream removed, broadcasting: 1\nI0810 00:29:56.758232 1220 log.go:181] (0xc00003b970) Go away received\nI0810 00:29:56.758796 1220 log.go:181] (0xc00003b970) (0xc0007a9540) Stream removed, broadcasting: 1\nI0810 00:29:56.758819 1220 log.go:181] (0xc00003b970) (0xc0007a95e0) Stream removed, broadcasting: 3\nI0810 00:29:56.758837 1220 log.go:181] (0xc00003b970) (0xc00055eb40) Stream removed, broadcasting: 5\n" Aug 10 00:29:56.764: INFO: stdout: "" Aug 10 00:29:56.764: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-7037 execpod-affinityvsdws -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.107.222.148:80/ ; done' Aug 10 00:29:57.080: INFO: stderr: "I0810 00:29:56.894995 1238 log.go:181] (0xc000abd290) (0xc000991a40) Create stream\nI0810 00:29:56.895059 1238 log.go:181] (0xc000abd290) (0xc000991a40) Stream added, broadcasting: 1\nI0810 00:29:56.900168 1238 log.go:181] (0xc000abd290) Reply frame received for 1\nI0810 00:29:56.900215 1238 
log.go:181] (0xc000abd290) (0xc0008270e0) Create stream\nI0810 00:29:56.900227 1238 log.go:181] (0xc000abd290) (0xc0008270e0) Stream added, broadcasting: 3\nI0810 00:29:56.901092 1238 log.go:181] (0xc000abd290) Reply frame received for 3\nI0810 00:29:56.901125 1238 log.go:181] (0xc000abd290) (0xc000462aa0) Create stream\nI0810 00:29:56.901135 1238 log.go:181] (0xc000abd290) (0xc000462aa0) Stream added, broadcasting: 5\nI0810 00:29:56.902079 1238 log.go:181] (0xc000abd290) Reply frame received for 5\nI0810 00:29:56.973121 1238 log.go:181] (0xc000abd290) Data frame received for 5\nI0810 00:29:56.973152 1238 log.go:181] (0xc000462aa0) (5) Data frame handling\nI0810 00:29:56.973162 1238 log.go:181] (0xc000462aa0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.222.148:80/\nI0810 00:29:56.973176 1238 log.go:181] (0xc000abd290) Data frame received for 3\nI0810 00:29:56.973183 1238 log.go:181] (0xc0008270e0) (3) Data frame handling\nI0810 00:29:56.973192 1238 log.go:181] (0xc0008270e0) (3) Data frame sent\nI0810 00:29:56.978342 1238 log.go:181] (0xc000abd290) Data frame received for 3\nI0810 00:29:56.978370 1238 log.go:181] (0xc0008270e0) (3) Data frame handling\nI0810 00:29:56.978397 1238 log.go:181] (0xc0008270e0) (3) Data frame sent\nI0810 00:29:56.979079 1238 log.go:181] (0xc000abd290) Data frame received for 3\nI0810 00:29:56.979098 1238 log.go:181] (0xc0008270e0) (3) Data frame handling\nI0810 00:29:56.979109 1238 log.go:181] (0xc0008270e0) (3) Data frame sent\nI0810 00:29:56.979122 1238 log.go:181] (0xc000abd290) Data frame received for 5\nI0810 00:29:56.979129 1238 log.go:181] (0xc000462aa0) (5) Data frame handling\nI0810 00:29:56.979136 1238 log.go:181] (0xc000462aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.222.148:80/\nI0810 00:29:56.985836 1238 log.go:181] (0xc000abd290) Data frame received for 3\nI0810 00:29:56.985850 1238 log.go:181] (0xc0008270e0) (3) Data frame handling\nI0810 
00:29:56.985866 1238 log.go:181] (0xc0008270e0) (3) Data frame sent\nI0810 00:29:56.986455 1238 log.go:181] (0xc000abd290) Data frame received for 3\nI0810 00:29:56.986472 1238 log.go:181] (0xc0008270e0) (3) Data frame handling\nI0810 00:29:56.986481 1238 log.go:181] (0xc0008270e0) (3) Data frame sent\nI0810 00:29:56.986494 1238 log.go:181] (0xc000abd290) Data frame received for 5\nI0810 00:29:56.986511 1238 log.go:181] (0xc000462aa0) (5) Data frame handling\nI0810 00:29:56.986539 1238 log.go:181] (0xc000462aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.222.148:80/\nI0810 00:29:56.990340 1238 log.go:181] (0xc000abd290) Data frame received for 3\nI0810 00:29:56.990358 1238 log.go:181] (0xc0008270e0) (3) Data frame handling\nI0810 00:29:56.990367 1238 log.go:181] (0xc0008270e0) (3) Data frame sent\nI0810 00:29:56.991075 1238 log.go:181] (0xc000abd290) Data frame received for 5\nI0810 00:29:56.991088 1238 log.go:181] (0xc000462aa0) (5) Data frame handling\nI0810 00:29:56.991095 1238 log.go:181] (0xc000462aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.222.148:80/\nI0810 00:29:56.991111 1238 log.go:181] (0xc000abd290) Data frame received for 3\nI0810 00:29:56.991145 1238 log.go:181] (0xc0008270e0) (3) Data frame handling\nI0810 00:29:56.991171 1238 log.go:181] (0xc0008270e0) (3) Data frame sent\nI0810 00:29:56.996405 1238 log.go:181] (0xc000abd290) Data frame received for 3\nI0810 00:29:56.996426 1238 log.go:181] (0xc0008270e0) (3) Data frame handling\nI0810 00:29:56.996444 1238 log.go:181] (0xc0008270e0) (3) Data frame sent\nI0810 00:29:56.997343 1238 log.go:181] (0xc000abd290) Data frame received for 3\nI0810 00:29:56.997411 1238 log.go:181] (0xc0008270e0) (3) Data frame handling\nI0810 00:29:56.997423 1238 log.go:181] (0xc0008270e0) (3) Data frame sent\nI0810 00:29:56.997434 1238 log.go:181] (0xc000abd290) Data frame received for 5\nI0810 00:29:56.997455 1238 log.go:181] (0xc000462aa0) (5) Data frame 
handling\nI0810 00:29:56.997482 1238 log.go:181] (0xc000462aa0) (5) Data frame sent\nI0810 00:29:56.997500 1238 log.go:181] (0xc000abd290) Data frame received for 5\nI0810 00:29:56.997509 1238 log.go:181] (0xc000462aa0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.222.148:80/\nI0810 00:29:56.997529 1238 log.go:181] (0xc000462aa0) (5) Data frame sent\nI0810 00:29:57.003644 1238 log.go:181] (0xc000abd290) Data frame received for 3\nI0810 00:29:57.003670 1238 log.go:181] (0xc0008270e0) (3) Data frame handling\nI0810 00:29:57.003695 1238 log.go:181] (0xc0008270e0) (3) Data frame sent\nI0810 00:29:57.004348 1238 log.go:181] (0xc000abd290) Data frame received for 5\nI0810 00:29:57.004372 1238 log.go:181] (0xc000462aa0) (5) Data frame handling\nI0810 00:29:57.004391 1238 log.go:181] (0xc000462aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.222.148:80/\nI0810 00:29:57.004416 1238 log.go:181] (0xc000abd290) Data frame received for 3\nI0810 00:29:57.004435 1238 log.go:181] (0xc0008270e0) (3) Data frame handling\nI0810 00:29:57.004446 1238 log.go:181] (0xc0008270e0) (3) Data frame sent\nI0810 00:29:57.009901 1238 log.go:181] (0xc000abd290) Data frame received for 3\nI0810 00:29:57.009922 1238 log.go:181] (0xc0008270e0) (3) Data frame handling\nI0810 00:29:57.009940 1238 log.go:181] (0xc0008270e0) (3) Data frame sent\nI0810 00:29:57.010279 1238 log.go:181] (0xc000abd290) Data frame received for 3\nI0810 00:29:57.010301 1238 log.go:181] (0xc0008270e0) (3) Data frame handling\nI0810 00:29:57.010310 1238 log.go:181] (0xc0008270e0) (3) Data frame sent\nI0810 00:29:57.010328 1238 log.go:181] (0xc000abd290) Data frame received for 5\nI0810 00:29:57.010340 1238 log.go:181] (0xc000462aa0) (5) Data frame handling\nI0810 00:29:57.010358 1238 log.go:181] (0xc000462aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.222.148:80/\nI0810 00:29:57.014989 1238 log.go:181] (0xc000abd290) Data frame 
received for 3\nI0810 00:29:57.015020 1238 log.go:181] (0xc0008270e0) (3) Data frame handling\nI0810 00:29:57.015041 1238 log.go:181] (0xc0008270e0) (3) Data frame sent\nI0810 00:29:57.015678 1238 log.go:181] (0xc000abd290) Data frame received for 5\nI0810 00:29:57.015701 1238 log.go:181] (0xc000462aa0) (5) Data frame handling\nI0810 00:29:57.015720 1238 log.go:181] (0xc000462aa0) (5) Data frame sent\nI0810 00:29:57.015730 1238 log.go:181] (0xc000abd290) Data frame received for 5\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.222.148:80/I0810 00:29:57.015738 1238 log.go:181] (0xc000462aa0) (5) Data frame handling\nI0810 00:29:57.015764 1238 log.go:181] (0xc000462aa0) (5) Data frame sent\n\nI0810 00:29:57.015786 1238 log.go:181] (0xc000abd290) Data frame received for 3\nI0810 00:29:57.015798 1238 log.go:181] (0xc0008270e0) (3) Data frame handling\nI0810 00:29:57.015808 1238 log.go:181] (0xc0008270e0) (3) Data frame sent\nI0810 00:29:57.021366 1238 log.go:181] (0xc000abd290) Data frame received for 3\nI0810 00:29:57.021390 1238 log.go:181] (0xc0008270e0) (3) Data frame handling\nI0810 00:29:57.021411 1238 log.go:181] (0xc0008270e0) (3) Data frame sent\nI0810 00:29:57.022043 1238 log.go:181] (0xc000abd290) Data frame received for 3\nI0810 00:29:57.022069 1238 log.go:181] (0xc0008270e0) (3) Data frame handling\nI0810 00:29:57.022097 1238 log.go:181] (0xc000abd290) Data frame received for 5\nI0810 00:29:57.022143 1238 log.go:181] (0xc000462aa0) (5) Data frame handling\nI0810 00:29:57.022176 1238 log.go:181] (0xc000462aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.222.148:80/\nI0810 00:29:57.022192 1238 log.go:181] (0xc0008270e0) (3) Data frame sent\nI0810 00:29:57.027226 1238 log.go:181] (0xc000abd290) Data frame received for 3\nI0810 00:29:57.027243 1238 log.go:181] (0xc0008270e0) (3) Data frame handling\nI0810 00:29:57.027254 1238 log.go:181] (0xc0008270e0) (3) Data frame sent\nI0810 00:29:57.027853 1238 log.go:181] 
(0xc000abd290) Data frame received for 3\nI0810 00:29:57.027880 1238 log.go:181] (0xc0008270e0) (3) Data frame handling\nI0810 00:29:57.027893 1238 log.go:181] (0xc0008270e0) (3) Data frame sent\nI0810 00:29:57.027910 1238 log.go:181] (0xc000abd290) Data frame received for 5\nI0810 00:29:57.027920 1238 log.go:181] (0xc000462aa0) (5) Data frame handling\nI0810 00:29:57.027937 1238 log.go:181] (0xc000462aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.222.148:80/\nI0810 00:29:57.032852 1238 log.go:181] (0xc000abd290) Data frame received for 3\nI0810 00:29:57.032868 1238 log.go:181] (0xc0008270e0) (3) Data frame handling\nI0810 00:29:57.032884 1238 log.go:181] (0xc0008270e0) (3) Data frame sent\nI0810 00:29:57.033714 1238 log.go:181] (0xc000abd290) Data frame received for 5\nI0810 00:29:57.033741 1238 log.go:181] (0xc000462aa0) (5) Data frame handling\nI0810 00:29:57.033749 1238 log.go:181] (0xc000462aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.222.148:80/\nI0810 00:29:57.033760 1238 log.go:181] (0xc000abd290) Data frame received for 3\nI0810 00:29:57.033766 1238 log.go:181] (0xc0008270e0) (3) Data frame handling\nI0810 00:29:57.033772 1238 log.go:181] (0xc0008270e0) (3) Data frame sent\nI0810 00:29:57.038807 1238 log.go:181] (0xc000abd290) Data frame received for 3\nI0810 00:29:57.038832 1238 log.go:181] (0xc0008270e0) (3) Data frame handling\nI0810 00:29:57.038850 1238 log.go:181] (0xc0008270e0) (3) Data frame sent\nI0810 00:29:57.039349 1238 log.go:181] (0xc000abd290) Data frame received for 5\nI0810 00:29:57.039368 1238 log.go:181] (0xc000462aa0) (5) Data frame handling\nI0810 00:29:57.039382 1238 log.go:181] (0xc000462aa0) (5) Data frame sent\nI0810 00:29:57.039399 1238 log.go:181] (0xc000abd290) Data frame received for 5\n+ echo\n+ curl -q -s --connect-timeout 2I0810 00:29:57.039410 1238 log.go:181] (0xc000462aa0) (5) Data frame handling\nI0810 00:29:57.039445 1238 log.go:181] (0xc000462aa0) (5) 
Data frame sent\n http://10.107.222.148:80/\nI0810 00:29:57.039482 1238 log.go:181] (0xc000abd290) Data frame received for 3\nI0810 00:29:57.039505 1238 log.go:181] (0xc0008270e0) (3) Data frame handling\nI0810 00:29:57.039530 1238 log.go:181] (0xc0008270e0) (3) Data frame sent\nI0810 00:29:57.045689 1238 log.go:181] (0xc000abd290) Data frame received for 3\nI0810 00:29:57.045706 1238 log.go:181] (0xc0008270e0) (3) Data frame handling\nI0810 00:29:57.045720 1238 log.go:181] (0xc0008270e0) (3) Data frame sent\nI0810 00:29:57.046610 1238 log.go:181] (0xc000abd290) Data frame received for 3\nI0810 00:29:57.046624 1238 log.go:181] (0xc0008270e0) (3) Data frame handling\nI0810 00:29:57.046633 1238 log.go:181] (0xc0008270e0) (3) Data frame sent\nI0810 00:29:57.046652 1238 log.go:181] (0xc000abd290) Data frame received for 5\nI0810 00:29:57.046666 1238 log.go:181] (0xc000462aa0) (5) Data frame handling\nI0810 00:29:57.046682 1238 log.go:181] (0xc000462aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.222.148:80/\nI0810 00:29:57.051340 1238 log.go:181] (0xc000abd290) Data frame received for 3\nI0810 00:29:57.051358 1238 log.go:181] (0xc0008270e0) (3) Data frame handling\nI0810 00:29:57.051373 1238 log.go:181] (0xc0008270e0) (3) Data frame sent\nI0810 00:29:57.051813 1238 log.go:181] (0xc000abd290) Data frame received for 3\nI0810 00:29:57.051834 1238 log.go:181] (0xc0008270e0) (3) Data frame handling\nI0810 00:29:57.051847 1238 log.go:181] (0xc0008270e0) (3) Data frame sent\nI0810 00:29:57.051856 1238 log.go:181] (0xc000abd290) Data frame received for 5\nI0810 00:29:57.051861 1238 log.go:181] (0xc000462aa0) (5) Data frame handling\nI0810 00:29:57.051867 1238 log.go:181] (0xc000462aa0) (5) Data frame sent\nI0810 00:29:57.051877 1238 log.go:181] (0xc000abd290) Data frame received for 5\nI0810 00:29:57.051888 1238 log.go:181] (0xc000462aa0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.222.148:80/\nI0810 
00:29:57.051902 1238 log.go:181] (0xc000462aa0) (5) Data frame sent\nI0810 00:29:57.058680 1238 log.go:181] (0xc000abd290) Data frame received for 3\nI0810 00:29:57.058703 1238 log.go:181] (0xc0008270e0) (3) Data frame handling\nI0810 00:29:57.058718 1238 log.go:181] (0xc0008270e0) (3) Data frame sent\nI0810 00:29:57.059350 1238 log.go:181] (0xc000abd290) Data frame received for 5\nI0810 00:29:57.059375 1238 log.go:181] (0xc000462aa0) (5) Data frame handling\nI0810 00:29:57.059387 1238 log.go:181] (0xc000462aa0) (5) Data frame sent\nI0810 00:29:57.059396 1238 log.go:181] (0xc000abd290) Data frame received for 5\nI0810 00:29:57.059403 1238 log.go:181] (0xc000462aa0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.222.148:80/\nI0810 00:29:57.059420 1238 log.go:181] (0xc000462aa0) (5) Data frame sent\nI0810 00:29:57.059429 1238 log.go:181] (0xc000abd290) Data frame received for 3\nI0810 00:29:57.059438 1238 log.go:181] (0xc0008270e0) (3) Data frame handling\nI0810 00:29:57.059446 1238 log.go:181] (0xc0008270e0) (3) Data frame sent\nI0810 00:29:57.064272 1238 log.go:181] (0xc000abd290) Data frame received for 3\nI0810 00:29:57.064293 1238 log.go:181] (0xc0008270e0) (3) Data frame handling\nI0810 00:29:57.064323 1238 log.go:181] (0xc0008270e0) (3) Data frame sent\nI0810 00:29:57.064930 1238 log.go:181] (0xc000abd290) Data frame received for 3\nI0810 00:29:57.064962 1238 log.go:181] (0xc000abd290) Data frame received for 5\nI0810 00:29:57.065004 1238 log.go:181] (0xc000462aa0) (5) Data frame handling\nI0810 00:29:57.065032 1238 log.go:181] (0xc000462aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.107.222.148:80/\nI0810 00:29:57.065050 1238 log.go:181] (0xc0008270e0) (3) Data frame handling\nI0810 00:29:57.065060 1238 log.go:181] (0xc0008270e0) (3) Data frame sent\nI0810 00:29:57.069602 1238 log.go:181] (0xc000abd290) Data frame received for 3\nI0810 00:29:57.069621 1238 log.go:181] (0xc0008270e0) (3) Data frame 
handling\nI0810 00:29:57.069645 1238 log.go:181] (0xc0008270e0) (3) Data frame sent\nI0810 00:29:57.070246 1238 log.go:181] (0xc000abd290) Data frame received for 3\nI0810 00:29:57.070288 1238 log.go:181] (0xc0008270e0) (3) Data frame handling\nI0810 00:29:57.070318 1238 log.go:181] (0xc000abd290) Data frame received for 5\nI0810 00:29:57.070343 1238 log.go:181] (0xc000462aa0) (5) Data frame handling\nI0810 00:29:57.072175 1238 log.go:181] (0xc000abd290) Data frame received for 1\nI0810 00:29:57.072218 1238 log.go:181] (0xc000991a40) (1) Data frame handling\nI0810 00:29:57.072261 1238 log.go:181] (0xc000991a40) (1) Data frame sent\nI0810 00:29:57.072307 1238 log.go:181] (0xc000abd290) (0xc000991a40) Stream removed, broadcasting: 1\nI0810 00:29:57.072350 1238 log.go:181] (0xc000abd290) Go away received\nI0810 00:29:57.072822 1238 log.go:181] (0xc000abd290) (0xc000991a40) Stream removed, broadcasting: 1\nI0810 00:29:57.072852 1238 log.go:181] (0xc000abd290) (0xc0008270e0) Stream removed, broadcasting: 3\nI0810 00:29:57.072868 1238 log.go:181] (0xc000abd290) (0xc000462aa0) Stream removed, broadcasting: 5\n" Aug 10 00:29:57.081: INFO: stdout: "\naffinity-clusterip-timeout-bjbck\naffinity-clusterip-timeout-bjbck\naffinity-clusterip-timeout-bjbck\naffinity-clusterip-timeout-bjbck\naffinity-clusterip-timeout-bjbck\naffinity-clusterip-timeout-bjbck\naffinity-clusterip-timeout-bjbck\naffinity-clusterip-timeout-bjbck\naffinity-clusterip-timeout-bjbck\naffinity-clusterip-timeout-bjbck\naffinity-clusterip-timeout-bjbck\naffinity-clusterip-timeout-bjbck\naffinity-clusterip-timeout-bjbck\naffinity-clusterip-timeout-bjbck\naffinity-clusterip-timeout-bjbck\naffinity-clusterip-timeout-bjbck" Aug 10 00:29:57.081: INFO: Received response from host: affinity-clusterip-timeout-bjbck Aug 10 00:29:57.081: INFO: Received response from host: affinity-clusterip-timeout-bjbck Aug 10 00:29:57.081: INFO: Received response from host: affinity-clusterip-timeout-bjbck Aug 10 00:29:57.081: INFO: 
Received response from host: affinity-clusterip-timeout-bjbck Aug 10 00:29:57.081: INFO: Received response from host: affinity-clusterip-timeout-bjbck Aug 10 00:29:57.081: INFO: Received response from host: affinity-clusterip-timeout-bjbck Aug 10 00:29:57.081: INFO: Received response from host: affinity-clusterip-timeout-bjbck Aug 10 00:29:57.081: INFO: Received response from host: affinity-clusterip-timeout-bjbck Aug 10 00:29:57.081: INFO: Received response from host: affinity-clusterip-timeout-bjbck Aug 10 00:29:57.081: INFO: Received response from host: affinity-clusterip-timeout-bjbck Aug 10 00:29:57.081: INFO: Received response from host: affinity-clusterip-timeout-bjbck Aug 10 00:29:57.081: INFO: Received response from host: affinity-clusterip-timeout-bjbck Aug 10 00:29:57.081: INFO: Received response from host: affinity-clusterip-timeout-bjbck Aug 10 00:29:57.081: INFO: Received response from host: affinity-clusterip-timeout-bjbck Aug 10 00:29:57.081: INFO: Received response from host: affinity-clusterip-timeout-bjbck Aug 10 00:29:57.081: INFO: Received response from host: affinity-clusterip-timeout-bjbck Aug 10 00:29:57.081: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-7037 execpod-affinityvsdws -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.107.222.148:80/' Aug 10 00:29:57.303: INFO: stderr: "I0810 00:29:57.211441 1256 log.go:181] (0xc000f1cfd0) (0xc00096fb80) Create stream\nI0810 00:29:57.211497 1256 log.go:181] (0xc000f1cfd0) (0xc00096fb80) Stream added, broadcasting: 1\nI0810 00:29:57.216499 1256 log.go:181] (0xc000f1cfd0) Reply frame received for 1\nI0810 00:29:57.216561 1256 log.go:181] (0xc000f1cfd0) (0xc000945220) Create stream\nI0810 00:29:57.216579 1256 log.go:181] (0xc000f1cfd0) (0xc000945220) Stream added, broadcasting: 3\nI0810 00:29:57.217587 1256 log.go:181] (0xc000f1cfd0) Reply frame received for 3\nI0810 00:29:57.217629 1256 log.go:181] 
(0xc000f1cfd0) (0xc0004febe0) Create stream\nI0810 00:29:57.217642 1256 log.go:181] (0xc000f1cfd0) (0xc0004febe0) Stream added, broadcasting: 5\nI0810 00:29:57.218486 1256 log.go:181] (0xc000f1cfd0) Reply frame received for 5\nI0810 00:29:57.291790 1256 log.go:181] (0xc000f1cfd0) Data frame received for 5\nI0810 00:29:57.291820 1256 log.go:181] (0xc0004febe0) (5) Data frame handling\nI0810 00:29:57.291832 1256 log.go:181] (0xc0004febe0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.107.222.148:80/\nI0810 00:29:57.295412 1256 log.go:181] (0xc000f1cfd0) Data frame received for 3\nI0810 00:29:57.295437 1256 log.go:181] (0xc000945220) (3) Data frame handling\nI0810 00:29:57.295457 1256 log.go:181] (0xc000945220) (3) Data frame sent\nI0810 00:29:57.295759 1256 log.go:181] (0xc000f1cfd0) Data frame received for 5\nI0810 00:29:57.295779 1256 log.go:181] (0xc0004febe0) (5) Data frame handling\nI0810 00:29:57.296044 1256 log.go:181] (0xc000f1cfd0) Data frame received for 3\nI0810 00:29:57.296064 1256 log.go:181] (0xc000945220) (3) Data frame handling\nI0810 00:29:57.297798 1256 log.go:181] (0xc000f1cfd0) Data frame received for 1\nI0810 00:29:57.297820 1256 log.go:181] (0xc00096fb80) (1) Data frame handling\nI0810 00:29:57.297835 1256 log.go:181] (0xc00096fb80) (1) Data frame sent\nI0810 00:29:57.297844 1256 log.go:181] (0xc000f1cfd0) (0xc00096fb80) Stream removed, broadcasting: 1\nI0810 00:29:57.297852 1256 log.go:181] (0xc000f1cfd0) Go away received\nI0810 00:29:57.298236 1256 log.go:181] (0xc000f1cfd0) (0xc00096fb80) Stream removed, broadcasting: 1\nI0810 00:29:57.298253 1256 log.go:181] (0xc000f1cfd0) (0xc000945220) Stream removed, broadcasting: 3\nI0810 00:29:57.298260 1256 log.go:181] (0xc000f1cfd0) (0xc0004febe0) Stream removed, broadcasting: 5\n" Aug 10 00:29:57.303: INFO: stdout: "affinity-clusterip-timeout-bjbck" Aug 10 00:30:12.304: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec 
--namespace=services-7037 execpod-affinityvsdws -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.107.222.148:80/' Aug 10 00:30:12.534: INFO: stderr: "I0810 00:30:12.454005 1274 log.go:181] (0xc000d2b6b0) (0xc000d1aaa0) Create stream\nI0810 00:30:12.454073 1274 log.go:181] (0xc000d2b6b0) (0xc000d1aaa0) Stream added, broadcasting: 1\nI0810 00:30:12.456367 1274 log.go:181] (0xc000d2b6b0) Reply frame received for 1\nI0810 00:30:12.456389 1274 log.go:181] (0xc000d2b6b0) (0xc0007ae3c0) Create stream\nI0810 00:30:12.456396 1274 log.go:181] (0xc000d2b6b0) (0xc0007ae3c0) Stream added, broadcasting: 3\nI0810 00:30:12.457462 1274 log.go:181] (0xc000d2b6b0) Reply frame received for 3\nI0810 00:30:12.457535 1274 log.go:181] (0xc000d2b6b0) (0xc000ca2280) Create stream\nI0810 00:30:12.457582 1274 log.go:181] (0xc000d2b6b0) (0xc000ca2280) Stream added, broadcasting: 5\nI0810 00:30:12.458432 1274 log.go:181] (0xc000d2b6b0) Reply frame received for 5\nI0810 00:30:12.523207 1274 log.go:181] (0xc000d2b6b0) Data frame received for 5\nI0810 00:30:12.523270 1274 log.go:181] (0xc000ca2280) (5) Data frame handling\nI0810 00:30:12.523299 1274 log.go:181] (0xc000ca2280) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.107.222.148:80/\nI0810 00:30:12.525075 1274 log.go:181] (0xc000d2b6b0) Data frame received for 3\nI0810 00:30:12.525096 1274 log.go:181] (0xc0007ae3c0) (3) Data frame handling\nI0810 00:30:12.525113 1274 log.go:181] (0xc0007ae3c0) (3) Data frame sent\nI0810 00:30:12.526023 1274 log.go:181] (0xc000d2b6b0) Data frame received for 3\nI0810 00:30:12.526036 1274 log.go:181] (0xc0007ae3c0) (3) Data frame handling\nI0810 00:30:12.526081 1274 log.go:181] (0xc000d2b6b0) Data frame received for 5\nI0810 00:30:12.526093 1274 log.go:181] (0xc000ca2280) (5) Data frame handling\nI0810 00:30:12.527862 1274 log.go:181] (0xc000d2b6b0) Data frame received for 1\nI0810 00:30:12.527891 1274 log.go:181] (0xc000d1aaa0) (1) Data frame handling\nI0810 00:30:12.527912 1274 
log.go:181] (0xc000d1aaa0) (1) Data frame sent\nI0810 00:30:12.527932 1274 log.go:181] (0xc000d2b6b0) (0xc000d1aaa0) Stream removed, broadcasting: 1\nI0810 00:30:12.527957 1274 log.go:181] (0xc000d2b6b0) Go away received\nI0810 00:30:12.528228 1274 log.go:181] (0xc000d2b6b0) (0xc000d1aaa0) Stream removed, broadcasting: 1\nI0810 00:30:12.528240 1274 log.go:181] (0xc000d2b6b0) (0xc0007ae3c0) Stream removed, broadcasting: 3\nI0810 00:30:12.528246 1274 log.go:181] (0xc000d2b6b0) (0xc000ca2280) Stream removed, broadcasting: 5\n" Aug 10 00:30:12.534: INFO: stdout: "affinity-clusterip-timeout-kccpc" Aug 10 00:30:12.534: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-7037, will wait for the garbage collector to delete the pods Aug 10 00:30:12.647: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 18.77015ms Aug 10 00:30:13.247: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 600.202025ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:30:24.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7037" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:57.262 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":178,"skipped":2883,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:30:24.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in volume subpath Aug 10 00:30:24.114: INFO: Waiting up to 5m0s for pod "var-expansion-c91492db-d8cc-45ee-a2cf-48cc52f18b64" in namespace "var-expansion-7127" to be "Succeeded or Failed" Aug 10 00:30:24.126: INFO: Pod "var-expansion-c91492db-d8cc-45ee-a2cf-48cc52f18b64": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.203021ms Aug 10 00:30:26.130: INFO: Pod "var-expansion-c91492db-d8cc-45ee-a2cf-48cc52f18b64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015073802s Aug 10 00:30:28.134: INFO: Pod "var-expansion-c91492db-d8cc-45ee-a2cf-48cc52f18b64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019631049s STEP: Saw pod success Aug 10 00:30:28.134: INFO: Pod "var-expansion-c91492db-d8cc-45ee-a2cf-48cc52f18b64" satisfied condition "Succeeded or Failed" Aug 10 00:30:28.137: INFO: Trying to get logs from node latest-worker2 pod var-expansion-c91492db-d8cc-45ee-a2cf-48cc52f18b64 container dapi-container: STEP: delete the pod Aug 10 00:30:28.374: INFO: Waiting for pod var-expansion-c91492db-d8cc-45ee-a2cf-48cc52f18b64 to disappear Aug 10 00:30:28.390: INFO: Pod var-expansion-c91492db-d8cc-45ee-a2cf-48cc52f18b64 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:30:28.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7127" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":303,"completed":179,"skipped":2902,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:30:28.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:30:44.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1487" for this suite. • [SLOW TEST:16.412 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":303,"completed":180,"skipped":2910,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:30:44.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 10 00:30:44.943: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Aug 10 00:30:47.912: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9889 create -f -' Aug 10 00:30:51.443: INFO: stderr: "" Aug 10 00:30:51.444: INFO: stdout: "e2e-test-crd-publish-openapi-7285-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Aug 10 00:30:51.444: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9889 delete e2e-test-crd-publish-openapi-7285-crds test-cr' Aug 10 00:30:51.598: INFO: stderr: "" Aug 10 00:30:51.598: INFO: stdout: "e2e-test-crd-publish-openapi-7285-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Aug 10 00:30:51.598: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 
--kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9889 apply -f -' Aug 10 00:30:51.906: INFO: stderr: "" Aug 10 00:30:51.906: INFO: stdout: "e2e-test-crd-publish-openapi-7285-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Aug 10 00:30:51.906: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9889 delete e2e-test-crd-publish-openapi-7285-crds test-cr' Aug 10 00:30:52.012: INFO: stderr: "" Aug 10 00:30:52.012: INFO: stdout: "e2e-test-crd-publish-openapi-7285-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Aug 10 00:30:52.012: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7285-crds' Aug 10 00:30:52.283: INFO: stderr: "" Aug 10 00:30:52.283: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7285-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:30:55.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9889" for this suite. 
• [SLOW TEST:10.432 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":303,"completed":181,"skipped":2911,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:30:55.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting the proxy server Aug 10 00:30:55.339: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:30:55.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4019" 
for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":303,"completed":182,"skipped":2917,"failed":0} SSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:30:55.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 10 00:30:55.539: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Aug 10 00:30:57.584: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:30:58.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9207" for this suite. 
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":303,"completed":183,"skipped":2920,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:30:58.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:31:03.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6053" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":303,"completed":184,"skipped":2948,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:31:03.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:31:34.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6788" for this suite. STEP: Destroying namespace "nsdeletetest-7314" for this suite. Aug 10 00:31:34.673: INFO: Namespace nsdeletetest-7314 was already deleted STEP: Destroying namespace "nsdeletetest-8057" for this suite. 
• [SLOW TEST:31.278 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":303,"completed":185,"skipped":2973,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:31:34.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-3261e01d-639f-4408-836b-a59cf7fd1520 STEP: Creating a pod to test consume configMaps Aug 10 00:31:34.783: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a82c1156-bb05-4328-8898-d1aad2c7899b" in namespace "projected-3809" to be "Succeeded or Failed" Aug 10 00:31:34.787: INFO: Pod "pod-projected-configmaps-a82c1156-bb05-4328-8898-d1aad2c7899b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.808767ms Aug 10 00:31:36.791: INFO: Pod "pod-projected-configmaps-a82c1156-bb05-4328-8898-d1aad2c7899b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007705761s Aug 10 00:31:38.826: INFO: Pod "pod-projected-configmaps-a82c1156-bb05-4328-8898-d1aad2c7899b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042533822s STEP: Saw pod success Aug 10 00:31:38.826: INFO: Pod "pod-projected-configmaps-a82c1156-bb05-4328-8898-d1aad2c7899b" satisfied condition "Succeeded or Failed" Aug 10 00:31:38.828: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-a82c1156-bb05-4328-8898-d1aad2c7899b container projected-configmap-volume-test: STEP: delete the pod Aug 10 00:31:38.971: INFO: Waiting for pod pod-projected-configmaps-a82c1156-bb05-4328-8898-d1aad2c7899b to disappear Aug 10 00:31:39.015: INFO: Pod pod-projected-configmaps-a82c1156-bb05-4328-8898-d1aad2c7899b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:31:39.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3809" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":186,"skipped":2979,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:31:39.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create deployment with httpd image Aug 10 00:31:39.244: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config create -f -' Aug 10 00:31:39.651: INFO: stderr: "" Aug 10 00:31:39.651: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image Aug 10 00:31:39.651: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config diff -f -' Aug 10 00:31:40.258: INFO: rc: 1 Aug 10 00:31:40.258: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config delete -f -' Aug 10 00:31:40.370: INFO: stderr: "" Aug 10 00:31:40.370: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:31:40.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1921" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":303,"completed":187,"skipped":2981,"failed":0} ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:31:40.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:161 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:31:40.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4582" for this suite. 
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":303,"completed":188,"skipped":2981,"failed":0} SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:31:40.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-72d037ef-2c3e-4b9b-b9d1-7df37637820b STEP: Creating a pod to test consume secrets Aug 10 00:31:40.600: INFO: Waiting up to 5m0s for pod "pod-secrets-0e87f44b-c7dd-4df3-b446-4e784851d802" in namespace "secrets-1204" to be "Succeeded or Failed" Aug 10 00:31:40.640: INFO: Pod "pod-secrets-0e87f44b-c7dd-4df3-b446-4e784851d802": Phase="Pending", Reason="", readiness=false. Elapsed: 40.269585ms Aug 10 00:31:42.644: INFO: Pod "pod-secrets-0e87f44b-c7dd-4df3-b446-4e784851d802": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044589478s Aug 10 00:31:44.656: INFO: Pod "pod-secrets-0e87f44b-c7dd-4df3-b446-4e784851d802": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055960072s Aug 10 00:31:46.660: INFO: Pod "pod-secrets-0e87f44b-c7dd-4df3-b446-4e784851d802": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.059878653s STEP: Saw pod success Aug 10 00:31:46.660: INFO: Pod "pod-secrets-0e87f44b-c7dd-4df3-b446-4e784851d802" satisfied condition "Succeeded or Failed" Aug 10 00:31:46.663: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-0e87f44b-c7dd-4df3-b446-4e784851d802 container secret-volume-test: STEP: delete the pod Aug 10 00:31:46.732: INFO: Waiting for pod pod-secrets-0e87f44b-c7dd-4df3-b446-4e784851d802 to disappear Aug 10 00:31:46.801: INFO: Pod pod-secrets-0e87f44b-c7dd-4df3-b446-4e784851d802 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:31:46.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1204" for this suite. • [SLOW TEST:6.296 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":189,"skipped":2987,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:31:46.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Aug 10 00:31:55.075: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 10 00:31:55.125: INFO: Pod pod-with-prestop-exec-hook still exists Aug 10 00:31:57.125: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 10 00:31:57.129: INFO: Pod pod-with-prestop-exec-hook still exists Aug 10 00:31:59.125: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 10 00:31:59.129: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:31:59.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3546" for this suite. 
• [SLOW TEST:12.332 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":303,"completed":190,"skipped":3005,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:31:59.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 10 00:32:00.156: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 10 00:32:02.229: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732616320, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732616320, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732616320, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732616320, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 10 00:32:05.321: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Aug 10 00:32:09.432: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config attach --namespace=webhook-4116 to-be-attached-pod -i -c=container1' Aug 10 00:32:09.555: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:32:09.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4116" for this suite. STEP: Destroying namespace "webhook-4116-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.534 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":303,"completed":191,"skipped":3019,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:32:09.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Aug 10 00:32:09.808: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9110 
/api/v1/namespaces/watch-9110/configmaps/e2e-watch-test-resource-version 4c1b6e8c-7935-4c1e-ab2c-1919bc9b00a5 5788488 0 2020-08-10 00:32:09 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-08-10 00:32:09 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 10 00:32:09.808: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9110 /api/v1/namespaces/watch-9110/configmaps/e2e-watch-test-resource-version 4c1b6e8c-7935-4c1e-ab2c-1919bc9b00a5 5788489 0 2020-08-10 00:32:09 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-08-10 00:32:09 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:32:09.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9110" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":303,"completed":192,"skipped":3036,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:32:09.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Aug 10 00:32:09.919: INFO: Waiting up to 5m0s for pod "downward-api-3f0091c3-5214-4d2f-b628-3c1d0b97ea53" in namespace "downward-api-1786" to be "Succeeded or Failed" Aug 10 00:32:10.033: INFO: Pod "downward-api-3f0091c3-5214-4d2f-b628-3c1d0b97ea53": Phase="Pending", Reason="", readiness=false. Elapsed: 114.137603ms Aug 10 00:32:12.311: INFO: Pod "downward-api-3f0091c3-5214-4d2f-b628-3c1d0b97ea53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.391924468s Aug 10 00:32:14.316: INFO: Pod "downward-api-3f0091c3-5214-4d2f-b628-3c1d0b97ea53": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.39648312s STEP: Saw pod success Aug 10 00:32:14.316: INFO: Pod "downward-api-3f0091c3-5214-4d2f-b628-3c1d0b97ea53" satisfied condition "Succeeded or Failed" Aug 10 00:32:14.318: INFO: Trying to get logs from node latest-worker pod downward-api-3f0091c3-5214-4d2f-b628-3c1d0b97ea53 container dapi-container: STEP: delete the pod Aug 10 00:32:14.485: INFO: Waiting for pod downward-api-3f0091c3-5214-4d2f-b628-3c1d0b97ea53 to disappear Aug 10 00:32:14.487: INFO: Pod downward-api-3f0091c3-5214-4d2f-b628-3c1d0b97ea53 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:32:14.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1786" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":303,"completed":193,"skipped":3054,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:32:14.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 
Aug 10 00:32:18.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-9749" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":303,"completed":194,"skipped":3062,"failed":0} S ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:32:18.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 10 00:32:18.965: INFO: Pod name rollover-pod: Found 0 pods out of 1 Aug 10 00:32:23.971: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 10 00:32:23.972: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Aug 10 00:32:25.976: INFO: Creating deployment "test-rollover-deployment" Aug 10 00:32:25.995: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Aug 10 00:32:28.018: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Aug 10 00:32:28.025: INFO: Ensure that both replica sets have 1 created replica Aug 10 00:32:28.030: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Aug 10 00:32:28.037: INFO: Updating deployment test-rollover-deployment Aug 10 00:32:28.038: 
INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Aug 10 00:32:30.096: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Aug 10 00:32:30.102: INFO: Make sure deployment "test-rollover-deployment" is complete Aug 10 00:32:30.108: INFO: all replica sets need to contain the pod-template-hash label Aug 10 00:32:30.108: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732616346, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732616346, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732616348, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732616346, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 10 00:32:32.115: INFO: all replica sets need to contain the pod-template-hash label Aug 10 00:32:32.115: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732616346, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732616346, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732616351, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732616346, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 10 00:32:34.116: INFO: all replica sets need to contain the pod-template-hash label Aug 10 00:32:34.116: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732616346, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732616346, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732616351, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732616346, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 10 00:32:36.117: INFO: all replica sets need to contain the pod-template-hash label Aug 10 00:32:36.117: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732616346, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732616346, 
loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732616351, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732616346, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 10 00:32:38.117: INFO: all replica sets need to contain the pod-template-hash label Aug 10 00:32:38.117: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732616346, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732616346, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732616351, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732616346, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 10 00:32:40.116: INFO: all replica sets need to contain the pod-template-hash label Aug 10 00:32:40.117: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732616346, 
loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732616346, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732616351, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732616346, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 10 00:32:42.115: INFO: Aug 10 00:32:42.115: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Aug 10 00:32:42.123: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-1351 /apis/apps/v1/namespaces/deployment-1351/deployments/test-rollover-deployment 595eb51d-ff09-45e0-bc67-7cd364c23955 5788749 2 2020-08-10 00:32:25 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-08-10 00:32:28 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager 
Update apps/v1 2020-08-10 00:32:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0036ed9b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum 
availability.,LastUpdateTime:2020-08-10 00:32:26 +0000 UTC,LastTransitionTime:2020-08-10 00:32:26 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-5797c7764" has successfully progressed.,LastUpdateTime:2020-08-10 00:32:41 +0000 UTC,LastTransitionTime:2020-08-10 00:32:26 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Aug 10 00:32:42.126: INFO: New ReplicaSet "test-rollover-deployment-5797c7764" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-5797c7764 deployment-1351 /apis/apps/v1/namespaces/deployment-1351/replicasets/test-rollover-deployment-5797c7764 1ef165f3-0b16-4dd9-8090-49d311e042e2 5788738 2 2020-08-10 00:32:28 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 595eb51d-ff09-45e0-bc67-7cd364c23955 0xc0036edf00 0xc0036edf01}] [] [{kube-controller-manager Update apps/v1 2020-08-10 00:32:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"595eb51d-ff09-45e0-bc67-7cd364c23955\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5797c7764,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0036edf78 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Aug 10 00:32:42.126: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Aug 10 00:32:42.126: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-1351 /apis/apps/v1/namespaces/deployment-1351/replicasets/test-rollover-controller 2c1e0b63-b752-4f96-8ab9-17e7a67a5626 5788748 2 2020-08-10 00:32:18 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 595eb51d-ff09-45e0-bc67-7cd364c23955 0xc0036eddf7 0xc0036eddf8}] [] [{e2e.test Update apps/v1 2020-08-10 00:32:18 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-08-10 00:32:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"595eb51d-ff09-45e0-bc67-7cd364c23955\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0036ede98 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 10 00:32:42.126: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-1351 /apis/apps/v1/namespaces/deployment-1351/replicasets/test-rollover-deployment-78bc8b888c a57c099d-a08a-4c21-aec1-bcacc138a1a2 5788688 2 2020-08-10 00:32:26 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 595eb51d-ff09-45e0-bc67-7cd364c23955 0xc0036edfe7 0xc0036edfe8}] [] [{kube-controller-manager Update apps/v1 2020-08-10 00:32:28 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"595eb51d-ff09-45e0-bc67-7cd364c23955\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002828078 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] 
nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 10 00:32:42.130: INFO: Pod "test-rollover-deployment-5797c7764-f2jw9" is available: &Pod{ObjectMeta:{test-rollover-deployment-5797c7764-f2jw9 test-rollover-deployment-5797c7764- deployment-1351 /api/v1/namespaces/deployment-1351/pods/test-rollover-deployment-5797c7764-f2jw9 ccf09cad-4c37-47c6-a104-6e3765e24553 5788704 0 2020-08-10 00:32:28 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[] [{apps/v1 ReplicaSet test-rollover-deployment-5797c7764 1ef165f3-0b16-4dd9-8090-49d311e042e2 0xc002828630 0xc002828631}] [] [{kube-controller-manager Update v1 2020-08-10 00:32:28 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1ef165f3-0b16-4dd9-8090-49d311e042e2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-10 00:32:31 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.18\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j899v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j899v,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j899v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolic
y:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-10 00:32:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-10 00:32:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-10 00:32:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-10 00:32:28 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.18,StartTime:2020-08-10 00:32:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-10 00:32:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://3758f93a092494daf51012db43b248ae3e91b523e1afea8c52ecebe23f0ac0f3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.18,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:32:42.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1351" for this suite. 
• [SLOW TEST:23.244 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":303,"completed":195,"skipped":3063,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:32:42.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9532 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9532;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9532 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9532;check="$$(dig +notcp +noall 
+answer +search dns-test-service.dns-9532.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9532.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9532.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9532.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9532.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9532.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9532.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9532.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9532.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9532.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9532.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9532.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9532.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 133.68.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.68.133_udp@PTR;check="$$(dig +tcp +noall +answer +search 133.68.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.68.133_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9532 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9532;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9532 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9532;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9532.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9532.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9532.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9532.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9532.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9532.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9532.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9532.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9532.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9532.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9532.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9532.svc;podARec=$$(hostname -i| awk -F.
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9532.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 133.68.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.68.133_udp@PTR;check="$$(dig +tcp +noall +answer +search 133.68.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.68.133_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 10 00:32:48.588: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178)
Aug 10 00:32:48.592: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178)
Aug 10 00:32:48.595: INFO: Unable to read wheezy_udp@dns-test-service.dns-9532 from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178)
Aug 10 00:32:48.598: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9532 from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178)
Aug 10 00:32:48.601: INFO: Unable to read wheezy_udp@dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods
dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:48.605: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:48.608: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:48.611: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:48.634: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:48.637: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:48.641: INFO: Unable to read jessie_udp@dns-test-service.dns-9532 from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:48.644: INFO: Unable to read jessie_tcp@dns-test-service.dns-9532 from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:48.647: INFO: Unable to read jessie_udp@dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the 
requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:48.650: INFO: Unable to read jessie_tcp@dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:48.653: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:48.656: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:48.675: INFO: Lookups using dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9532 wheezy_tcp@dns-test-service.dns-9532 wheezy_udp@dns-test-service.dns-9532.svc wheezy_tcp@dns-test-service.dns-9532.svc wheezy_udp@_http._tcp.dns-test-service.dns-9532.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9532.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9532 jessie_tcp@dns-test-service.dns-9532 jessie_udp@dns-test-service.dns-9532.svc jessie_tcp@dns-test-service.dns-9532.svc jessie_udp@_http._tcp.dns-test-service.dns-9532.svc jessie_tcp@_http._tcp.dns-test-service.dns-9532.svc] Aug 10 00:32:53.680: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:53.684: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not 
find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:53.688: INFO: Unable to read wheezy_udp@dns-test-service.dns-9532 from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:53.691: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9532 from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:53.695: INFO: Unable to read wheezy_udp@dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:53.698: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:53.701: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:53.704: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:53.725: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:53.727: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: 
the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:53.730: INFO: Unable to read jessie_udp@dns-test-service.dns-9532 from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:53.733: INFO: Unable to read jessie_tcp@dns-test-service.dns-9532 from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:53.736: INFO: Unable to read jessie_udp@dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:53.739: INFO: Unable to read jessie_tcp@dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:53.742: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:53.745: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:53.779: INFO: Lookups using dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9532 wheezy_tcp@dns-test-service.dns-9532 wheezy_udp@dns-test-service.dns-9532.svc wheezy_tcp@dns-test-service.dns-9532.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-9532.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9532.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9532 jessie_tcp@dns-test-service.dns-9532 jessie_udp@dns-test-service.dns-9532.svc jessie_tcp@dns-test-service.dns-9532.svc jessie_udp@_http._tcp.dns-test-service.dns-9532.svc jessie_tcp@_http._tcp.dns-test-service.dns-9532.svc] Aug 10 00:32:58.680: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:58.684: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:58.687: INFO: Unable to read wheezy_udp@dns-test-service.dns-9532 from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:58.691: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9532 from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:58.693: INFO: Unable to read wheezy_udp@dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:58.696: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:58.699: INFO: Unable to read 
wheezy_udp@_http._tcp.dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:58.702: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:58.725: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:58.728: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:58.731: INFO: Unable to read jessie_udp@dns-test-service.dns-9532 from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:58.734: INFO: Unable to read jessie_tcp@dns-test-service.dns-9532 from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:58.736: INFO: Unable to read jessie_udp@dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:58.739: INFO: Unable to read jessie_tcp@dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:58.741: 
INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:58.744: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:32:58.763: INFO: Lookups using dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9532 wheezy_tcp@dns-test-service.dns-9532 wheezy_udp@dns-test-service.dns-9532.svc wheezy_tcp@dns-test-service.dns-9532.svc wheezy_udp@_http._tcp.dns-test-service.dns-9532.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9532.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9532 jessie_tcp@dns-test-service.dns-9532 jessie_udp@dns-test-service.dns-9532.svc jessie_tcp@dns-test-service.dns-9532.svc jessie_udp@_http._tcp.dns-test-service.dns-9532.svc jessie_tcp@_http._tcp.dns-test-service.dns-9532.svc] Aug 10 00:33:03.683: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:33:03.686: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:33:03.689: INFO: Unable to read wheezy_udp@dns-test-service.dns-9532 from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 
00:33:03.692: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9532 from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:33:03.694: INFO: Unable to read wheezy_udp@dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:33:03.697: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:33:03.699: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:33:03.701: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:33:03.719: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:33:03.722: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:33:03.724: INFO: Unable to read jessie_udp@dns-test-service.dns-9532 from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods 
dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:33:03.727: INFO: Unable to read jessie_tcp@dns-test-service.dns-9532 from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:33:03.730: INFO: Unable to read jessie_udp@dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:33:03.733: INFO: Unable to read jessie_tcp@dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:33:03.735: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:33:03.738: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:33:03.755: INFO: Lookups using dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9532 wheezy_tcp@dns-test-service.dns-9532 wheezy_udp@dns-test-service.dns-9532.svc wheezy_tcp@dns-test-service.dns-9532.svc wheezy_udp@_http._tcp.dns-test-service.dns-9532.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9532.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9532 jessie_tcp@dns-test-service.dns-9532 jessie_udp@dns-test-service.dns-9532.svc jessie_tcp@dns-test-service.dns-9532.svc 
jessie_udp@_http._tcp.dns-test-service.dns-9532.svc jessie_tcp@_http._tcp.dns-test-service.dns-9532.svc] Aug 10 00:33:08.680: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:33:08.683: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:33:08.686: INFO: Unable to read wheezy_udp@dns-test-service.dns-9532 from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:33:08.689: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9532 from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:33:08.691: INFO: Unable to read wheezy_udp@dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:33:08.694: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:33:08.697: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:33:08.699: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9532.svc from pod 
dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:33:08.715: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:33:08.717: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:33:08.719: INFO: Unable to read jessie_udp@dns-test-service.dns-9532 from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:33:08.721: INFO: Unable to read jessie_tcp@dns-test-service.dns-9532 from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:33:08.723: INFO: Unable to read jessie_udp@dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:33:08.729: INFO: Unable to read jessie_tcp@dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:33:08.732: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:33:08.734: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:33:08.749: INFO: Lookups using dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9532 wheezy_tcp@dns-test-service.dns-9532 wheezy_udp@dns-test-service.dns-9532.svc wheezy_tcp@dns-test-service.dns-9532.svc wheezy_udp@_http._tcp.dns-test-service.dns-9532.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9532.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9532 jessie_tcp@dns-test-service.dns-9532 jessie_udp@dns-test-service.dns-9532.svc jessie_tcp@dns-test-service.dns-9532.svc jessie_udp@_http._tcp.dns-test-service.dns-9532.svc jessie_tcp@_http._tcp.dns-test-service.dns-9532.svc] Aug 10 00:33:13.680: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:33:13.700: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:33:13.704: INFO: Unable to read wheezy_udp@dns-test-service.dns-9532 from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:33:13.707: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9532 from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:33:13.710: INFO: Unable to read 
wheezy_udp@dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:33:13.713: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:33:13.715: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:33:13.726: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:33:13.757: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:33:13.760: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:33:13.763: INFO: Unable to read jessie_udp@dns-test-service.dns-9532 from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:33:13.766: INFO: Unable to read jessie_tcp@dns-test-service.dns-9532 from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178) Aug 10 00:33:13.768: 
INFO: Unable to read jessie_udp@dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178)
Aug 10 00:33:13.771: INFO: Unable to read jessie_tcp@dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178)
Aug 10 00:33:13.773: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178)
Aug 10 00:33:13.776: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9532.svc from pod dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178: the server could not find the requested resource (get pods dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178)
Aug 10 00:33:13.792: INFO: Lookups using dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9532 wheezy_tcp@dns-test-service.dns-9532 wheezy_udp@dns-test-service.dns-9532.svc wheezy_tcp@dns-test-service.dns-9532.svc wheezy_udp@_http._tcp.dns-test-service.dns-9532.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9532.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9532 jessie_tcp@dns-test-service.dns-9532 jessie_udp@dns-test-service.dns-9532.svc jessie_tcp@dns-test-service.dns-9532.svc jessie_udp@_http._tcp.dns-test-service.dns-9532.svc jessie_tcp@_http._tcp.dns-test-service.dns-9532.svc]
Aug 10 00:33:18.777: INFO: DNS probes using dns-9532/dns-test-ca2cd8ef-3e33-4a81-ad02-021693acd178 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 00:33:19.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9532" for this suite.
• [SLOW TEST:37.571 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":303,"completed":196,"skipped":3116,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 00:33:19.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Aug 10 00:33:20.374: INFO: Pod name wrapped-volume-race-88106103-7459-47b3-bd9b-5bb32fec582b: Found 0 pods out of 5
Aug 10 00:33:25.384: INFO: Pod name wrapped-volume-race-88106103-7459-47b3-bd9b-5bb32fec582b: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController
wrapped-volume-race-88106103-7459-47b3-bd9b-5bb32fec582b in namespace emptydir-wrapper-7730, will wait for the garbage collector to delete the pods Aug 10 00:33:37.503: INFO: Deleting ReplicationController wrapped-volume-race-88106103-7459-47b3-bd9b-5bb32fec582b took: 24.794408ms Aug 10 00:33:38.003: INFO: Terminating ReplicationController wrapped-volume-race-88106103-7459-47b3-bd9b-5bb32fec582b pods took: 500.22939ms STEP: Creating RC which spawns configmap-volume pods Aug 10 00:33:53.950: INFO: Pod name wrapped-volume-race-86eb04b6-3d58-40d1-8418-47d2e575521c: Found 0 pods out of 5 Aug 10 00:33:58.959: INFO: Pod name wrapped-volume-race-86eb04b6-3d58-40d1-8418-47d2e575521c: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-86eb04b6-3d58-40d1-8418-47d2e575521c in namespace emptydir-wrapper-7730, will wait for the garbage collector to delete the pods Aug 10 00:34:15.041: INFO: Deleting ReplicationController wrapped-volume-race-86eb04b6-3d58-40d1-8418-47d2e575521c took: 6.08483ms Aug 10 00:34:15.541: INFO: Terminating ReplicationController wrapped-volume-race-86eb04b6-3d58-40d1-8418-47d2e575521c pods took: 500.243976ms STEP: Creating RC which spawns configmap-volume pods Aug 10 00:34:24.188: INFO: Pod name wrapped-volume-race-bc916182-d81e-4015-9415-031de49164c5: Found 0 pods out of 5 Aug 10 00:34:29.197: INFO: Pod name wrapped-volume-race-bc916182-d81e-4015-9415-031de49164c5: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-bc916182-d81e-4015-9415-031de49164c5 in namespace emptydir-wrapper-7730, will wait for the garbage collector to delete the pods Aug 10 00:34:43.413: INFO: Deleting ReplicationController wrapped-volume-race-bc916182-d81e-4015-9415-031de49164c5 took: 61.88229ms Aug 10 00:34:43.813: INFO: Terminating ReplicationController wrapped-volume-race-bc916182-d81e-4015-9415-031de49164c5 pods took: 400.19861ms STEP: Cleaning up the 
configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:34:54.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7730" for this suite. • [SLOW TEST:94.481 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":303,"completed":197,"skipped":3145,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:34:54.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-2227 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 10 00:34:54.526: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Aug 10 00:34:54.601: INFO: The status of Pod 
netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 10 00:34:56.605: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 10 00:34:58.606: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 00:35:00.612: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 00:35:02.606: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 00:35:04.606: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 00:35:06.606: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 00:35:08.606: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 00:35:10.606: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 00:35:12.624: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 00:35:14.605: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 00:35:16.604: INFO: The status of Pod netserver-0 is Running (Ready = true) Aug 10 00:35:16.610: INFO: The status of Pod netserver-1 is Running (Ready = false) Aug 10 00:35:18.613: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Aug 10 00:35:22.635: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.31:8080/dial?request=hostname&protocol=http&host=10.244.1.91&port=8080&tries=1'] Namespace:pod-network-test-2227 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 10 00:35:22.635: INFO: >>> kubeConfig: /root/.kube/config I0810 00:35:22.666252 8 log.go:181] (0xc0012b48f0) (0xc00264c500) Create stream I0810 00:35:22.666293 8 log.go:181] (0xc0012b48f0) (0xc00264c500) Stream added, broadcasting: 1 I0810 00:35:22.668132 8 log.go:181] (0xc0012b48f0) Reply frame received for 1 I0810 00:35:22.668176 8 log.go:181] (0xc0012b48f0) (0xc001a35e00) Create stream I0810 00:35:22.668189 8 log.go:181] (0xc0012b48f0) 
(0xc001a35e00) Stream added, broadcasting: 3 I0810 00:35:22.669267 8 log.go:181] (0xc0012b48f0) Reply frame received for 3 I0810 00:35:22.669301 8 log.go:181] (0xc0012b48f0) (0xc001a35ea0) Create stream I0810 00:35:22.669313 8 log.go:181] (0xc0012b48f0) (0xc001a35ea0) Stream added, broadcasting: 5 I0810 00:35:22.670221 8 log.go:181] (0xc0012b48f0) Reply frame received for 5 I0810 00:35:22.753365 8 log.go:181] (0xc0012b48f0) Data frame received for 3 I0810 00:35:22.753411 8 log.go:181] (0xc001a35e00) (3) Data frame handling I0810 00:35:22.753452 8 log.go:181] (0xc001a35e00) (3) Data frame sent I0810 00:35:22.753584 8 log.go:181] (0xc0012b48f0) Data frame received for 3 I0810 00:35:22.753616 8 log.go:181] (0xc001a35e00) (3) Data frame handling I0810 00:35:22.753724 8 log.go:181] (0xc0012b48f0) Data frame received for 5 I0810 00:35:22.753751 8 log.go:181] (0xc001a35ea0) (5) Data frame handling I0810 00:35:22.755415 8 log.go:181] (0xc0012b48f0) Data frame received for 1 I0810 00:35:22.755434 8 log.go:181] (0xc00264c500) (1) Data frame handling I0810 00:35:22.755443 8 log.go:181] (0xc00264c500) (1) Data frame sent I0810 00:35:22.755452 8 log.go:181] (0xc0012b48f0) (0xc00264c500) Stream removed, broadcasting: 1 I0810 00:35:22.755532 8 log.go:181] (0xc0012b48f0) (0xc00264c500) Stream removed, broadcasting: 1 I0810 00:35:22.755549 8 log.go:181] (0xc0012b48f0) (0xc001a35e00) Stream removed, broadcasting: 3 I0810 00:35:22.755559 8 log.go:181] (0xc0012b48f0) (0xc001a35ea0) Stream removed, broadcasting: 5 Aug 10 00:35:22.755: INFO: Waiting for responses: map[] I0810 00:35:22.755856 8 log.go:181] (0xc0012b48f0) Go away received Aug 10 00:35:22.759: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.31:8080/dial?request=hostname&protocol=http&host=10.244.2.30&port=8080&tries=1'] Namespace:pod-network-test-2227 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 10 00:35:22.759: INFO: 
>>> kubeConfig: /root/.kube/config I0810 00:35:22.797661 8 log.go:181] (0xc000e1e580) (0xc0035399a0) Create stream I0810 00:35:22.797695 8 log.go:181] (0xc000e1e580) (0xc0035399a0) Stream added, broadcasting: 1 I0810 00:35:22.802059 8 log.go:181] (0xc000e1e580) Reply frame received for 1 I0810 00:35:22.802143 8 log.go:181] (0xc000e1e580) (0xc002dce820) Create stream I0810 00:35:22.802173 8 log.go:181] (0xc000e1e580) (0xc002dce820) Stream added, broadcasting: 3 I0810 00:35:22.803569 8 log.go:181] (0xc000e1e580) Reply frame received for 3 I0810 00:35:22.803611 8 log.go:181] (0xc000e1e580) (0xc00264c5a0) Create stream I0810 00:35:22.803630 8 log.go:181] (0xc000e1e580) (0xc00264c5a0) Stream added, broadcasting: 5 I0810 00:35:22.804509 8 log.go:181] (0xc000e1e580) Reply frame received for 5 I0810 00:35:22.870769 8 log.go:181] (0xc000e1e580) Data frame received for 3 I0810 00:35:22.870847 8 log.go:181] (0xc002dce820) (3) Data frame handling I0810 00:35:22.870886 8 log.go:181] (0xc002dce820) (3) Data frame sent I0810 00:35:22.871094 8 log.go:181] (0xc000e1e580) Data frame received for 5 I0810 00:35:22.871112 8 log.go:181] (0xc00264c5a0) (5) Data frame handling I0810 00:35:22.871531 8 log.go:181] (0xc000e1e580) Data frame received for 3 I0810 00:35:22.871573 8 log.go:181] (0xc002dce820) (3) Data frame handling I0810 00:35:22.873480 8 log.go:181] (0xc000e1e580) Data frame received for 1 I0810 00:35:22.873493 8 log.go:181] (0xc0035399a0) (1) Data frame handling I0810 00:35:22.873500 8 log.go:181] (0xc0035399a0) (1) Data frame sent I0810 00:35:22.873507 8 log.go:181] (0xc000e1e580) (0xc0035399a0) Stream removed, broadcasting: 1 I0810 00:35:22.873576 8 log.go:181] (0xc000e1e580) (0xc0035399a0) Stream removed, broadcasting: 1 I0810 00:35:22.873589 8 log.go:181] (0xc000e1e580) (0xc002dce820) Stream removed, broadcasting: 3 I0810 00:35:22.873677 8 log.go:181] (0xc000e1e580) (0xc00264c5a0) Stream removed, broadcasting: 5 I0810 00:35:22.873695 8 log.go:181] (0xc000e1e580) Go away 
received Aug 10 00:35:22.873: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:35:22.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2227" for this suite. • [SLOW TEST:28.696 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":303,"completed":198,"skipped":3161,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:35:22.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Aug 10 00:35:33.091: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 10 00:35:33.140: INFO: Pod pod-with-prestop-http-hook still exists Aug 10 00:35:35.140: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 10 00:35:35.145: INFO: Pod pod-with-prestop-http-hook still exists Aug 10 00:35:37.140: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 10 00:35:37.144: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:35:37.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9334" for this suite. 
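For reference, the preStop HTTP hook exercised by this test corresponds to a pod spec along these lines. This is a minimal sketch, not the actual test fixture: the image, hook path, port, and handler IP shown here are illustrative assumptions (the real test wires the hook to the handler pod created in BeforeEach).

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook   # pod name taken from the log above
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: k8s.gcr.io/pause:3.2      # illustrative image; the test's image may differ
    lifecycle:
      preStop:
        httpGet:                     # on deletion, kubelet issues this GET before killing the container
          path: /echo?msg=prestop    # hypothetical path; the handler pod records the request
          port: 8080
          host: 10.244.2.31          # hypothetical handler-pod IP
```

Deleting such a pod triggers the GET against the handler, which is what the "check prestop hook" step above verifies.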
• [SLOW TEST:14.285 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":303,"completed":199,"skipped":3178,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:35:37.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 10 00:35:37.237: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3906abe1-d608-4b12-8e2d-c440bbb8219c" in namespace "projected-2981" to be "Succeeded or Failed" Aug 10 00:35:37.241: INFO: Pod "downwardapi-volume-3906abe1-d608-4b12-8e2d-c440bbb8219c": 
Phase="Pending", Reason="", readiness=false. Elapsed: 4.045486ms Aug 10 00:35:39.245: INFO: Pod "downwardapi-volume-3906abe1-d608-4b12-8e2d-c440bbb8219c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008322596s Aug 10 00:35:41.250: INFO: Pod "downwardapi-volume-3906abe1-d608-4b12-8e2d-c440bbb8219c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013159962s STEP: Saw pod success Aug 10 00:35:41.250: INFO: Pod "downwardapi-volume-3906abe1-d608-4b12-8e2d-c440bbb8219c" satisfied condition "Succeeded or Failed" Aug 10 00:35:41.253: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-3906abe1-d608-4b12-8e2d-c440bbb8219c container client-container: STEP: delete the pod Aug 10 00:35:41.294: INFO: Waiting for pod downwardapi-volume-3906abe1-d608-4b12-8e2d-c440bbb8219c to disappear Aug 10 00:35:41.311: INFO: Pod downwardapi-volume-3906abe1-d608-4b12-8e2d-c440bbb8219c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:35:41.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2981" for this suite. 
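The projected downward API volume being tested here exposes the container's CPU request as a file. A minimal sketch of such a pod follows; the names, image, and the 250m request are illustrative assumptions, not values from the test:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container           # container name as seen in the log
    image: busybox:1.29              # illustrative image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                    # assumed value; this is what gets projected
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:        # projects a resource field rather than a metadata field
              containerName: client-container
              resource: requests.cpu
```

The test then reads the container log (the `cat` output) and checks it against the declared request, which matches the "Trying to get logs ... container client-container" step above.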
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":303,"completed":200,"skipped":3190,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:35:41.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 10 00:35:41.451: INFO: Create a RollingUpdate DaemonSet Aug 10 00:35:41.456: INFO: Check that daemon pods launch on every node of the cluster Aug 10 00:35:41.487: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 10 00:35:41.499: INFO: Number of nodes with available pods: 0 Aug 10 00:35:41.499: INFO: Node latest-worker is running more than one daemon pod Aug 10 00:35:42.503: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 10 00:35:42.506: INFO: Number of nodes with available pods: 0 Aug 10 00:35:42.506: INFO: Node latest-worker is running more than one daemon pod Aug 10 00:35:43.936: INFO: DaemonSet pods can't tolerate node 
latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 10 00:35:43.939: INFO: Number of nodes with available pods: 0 Aug 10 00:35:43.939: INFO: Node latest-worker is running more than one daemon pod Aug 10 00:35:44.627: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 10 00:35:44.739: INFO: Number of nodes with available pods: 0 Aug 10 00:35:44.739: INFO: Node latest-worker is running more than one daemon pod Aug 10 00:35:45.578: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 10 00:35:45.611: INFO: Number of nodes with available pods: 0 Aug 10 00:35:45.611: INFO: Node latest-worker is running more than one daemon pod Aug 10 00:35:46.506: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 10 00:35:46.511: INFO: Number of nodes with available pods: 2 Aug 10 00:35:46.511: INFO: Number of running nodes: 2, number of available pods: 2 Aug 10 00:35:46.511: INFO: Update the DaemonSet to trigger a rollout Aug 10 00:35:46.518: INFO: Updating DaemonSet daemon-set Aug 10 00:35:54.619: INFO: Roll back the DaemonSet before rollout is complete Aug 10 00:35:54.625: INFO: Updating DaemonSet daemon-set Aug 10 00:35:54.625: INFO: Make sure DaemonSet rollback is complete Aug 10 00:35:54.679: INFO: Wrong image for pod: daemon-set-m2q5j. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Aug 10 00:35:54.679: INFO: Pod daemon-set-m2q5j is not available Aug 10 00:35:54.683: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 10 00:35:55.688: INFO: Wrong image for pod: daemon-set-m2q5j. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Aug 10 00:35:55.688: INFO: Pod daemon-set-m2q5j is not available Aug 10 00:35:55.695: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 10 00:35:56.688: INFO: Pod daemon-set-82zgf is not available Aug 10 00:35:56.692: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4023, will wait for the garbage collector to delete the pods Aug 10 00:35:56.757: INFO: Deleting DaemonSet.extensions daemon-set took: 6.125934ms Aug 10 00:35:57.258: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.25031ms Aug 10 00:36:03.961: INFO: Number of nodes with available pods: 0 Aug 10 00:36:03.961: INFO: Number of running nodes: 0, number of available pods: 0 Aug 10 00:36:03.990: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4023/daemonsets","resourceVersion":"5790473"},"items":null} Aug 10 00:36:03.992: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4023/pods","resourceVersion":"5790473"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:36:04.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4023" for this suite. • [SLOW TEST:22.679 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":303,"completed":201,"skipped":3214,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:36:04.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-2e40e5ce-30b3-43aa-97a6-d32458dc7d70 STEP: Creating a pod to test consume secrets Aug 10 00:36:04.117: INFO: Waiting up to 5m0s for pod "pod-secrets-507ad29f-c7a2-4c39-8cca-2ecf08807997" in namespace "secrets-1843" to be "Succeeded or Failed" Aug 10 00:36:04.122: INFO: Pod "pod-secrets-507ad29f-c7a2-4c39-8cca-2ecf08807997": Phase="Pending", Reason="", 
readiness=false. Elapsed: 4.592382ms Aug 10 00:36:06.127: INFO: Pod "pod-secrets-507ad29f-c7a2-4c39-8cca-2ecf08807997": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009216233s Aug 10 00:36:08.131: INFO: Pod "pod-secrets-507ad29f-c7a2-4c39-8cca-2ecf08807997": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013781275s STEP: Saw pod success Aug 10 00:36:08.131: INFO: Pod "pod-secrets-507ad29f-c7a2-4c39-8cca-2ecf08807997" satisfied condition "Succeeded or Failed" Aug 10 00:36:08.134: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-507ad29f-c7a2-4c39-8cca-2ecf08807997 container secret-volume-test: STEP: delete the pod Aug 10 00:36:08.190: INFO: Waiting for pod pod-secrets-507ad29f-c7a2-4c39-8cca-2ecf08807997 to disappear Aug 10 00:36:08.200: INFO: Pod pod-secrets-507ad29f-c7a2-4c39-8cca-2ecf08807997 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:36:08.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1843" for this suite. 
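The defaultMode behavior verified above corresponds to a secret volume declared along these lines. This is a hedged sketch: the secret contents, image, and the 0400 mode are illustrative, not taken from the test source.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-test                  # illustrative; the test uses a generated name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test         # container name as seen in the log
    image: busybox:1.29              # illustrative image
    command: ["sh", "-c", "stat -c '%a' /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
      defaultMode: 0400              # every projected file gets this mode unless overridden per item
```

The pod's log output (the file mode printed by `stat`) is what lets the test confirm the volume honored `defaultMode`.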
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":202,"skipped":3233,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:36:08.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1415 STEP: creating a pod Aug 10 00:36:08.299: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.20 --namespace=kubectl-1977 -- logs-generator --log-lines-total 100 --run-duration 20s' Aug 10 00:36:08.420: INFO: stderr: "" Aug 10 00:36:08.420: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Waiting for log generator to start. Aug 10 00:36:08.420: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Aug 10 00:36:08.420: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-1977" to be "running and ready, or succeeded" Aug 10 00:36:08.447: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false.
Elapsed: 26.801494ms Aug 10 00:36:10.450: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030363904s Aug 10 00:36:12.454: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.03457917s Aug 10 00:36:12.454: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Aug 10 00:36:12.454: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for a matching strings Aug 10 00:36:12.455: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1977' Aug 10 00:36:12.562: INFO: stderr: "" Aug 10 00:36:12.562: INFO: stdout: "I0810 00:36:11.082613 1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/hvg 311\nI0810 00:36:11.282763 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/vd2 555\nI0810 00:36:11.482814 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/2jb 356\nI0810 00:36:11.682768 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/7w49 508\nI0810 00:36:11.882792 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/j9c2 557\nI0810 00:36:12.082795 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/nmc5 270\nI0810 00:36:12.282775 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/56d4 295\nI0810 00:36:12.482761 1 logs_generator.go:76] 7 POST /api/v1/namespaces/kube-system/pods/dfqm 274\n" STEP: limiting log lines Aug 10 00:36:12.562: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1977 --tail=1' Aug 10 00:36:12.677: INFO: stderr: "" Aug 10 00:36:12.677: INFO: stdout: "I0810 00:36:12.482761 1 logs_generator.go:76] 7 POST /api/v1/namespaces/kube-system/pods/dfqm 274\n" Aug 10 00:36:12.677: INFO: got output "I0810 00:36:12.482761 1 
logs_generator.go:76] 7 POST /api/v1/namespaces/kube-system/pods/dfqm 274\n" STEP: limiting log bytes Aug 10 00:36:12.677: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1977 --limit-bytes=1' Aug 10 00:36:12.790: INFO: stderr: "" Aug 10 00:36:12.790: INFO: stdout: "I" Aug 10 00:36:12.790: INFO: got output "I" STEP: exposing timestamps Aug 10 00:36:12.790: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1977 --tail=1 --timestamps' Aug 10 00:36:12.911: INFO: stderr: "" Aug 10 00:36:12.911: INFO: stdout: "2020-08-10T00:36:12.882889335Z I0810 00:36:12.882723 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/4jz8 401\n" Aug 10 00:36:12.911: INFO: got output "2020-08-10T00:36:12.882889335Z I0810 00:36:12.882723 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/4jz8 401\n" STEP: restricting to a time range Aug 10 00:36:15.411: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1977 --since=1s' Aug 10 00:36:15.535: INFO: stderr: "" Aug 10 00:36:15.535: INFO: stdout: "I0810 00:36:14.682806 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/ns/pods/4xsc 447\nI0810 00:36:14.882797 1 logs_generator.go:76] 19 GET /api/v1/namespaces/default/pods/bmv 451\nI0810 00:36:15.082787 1 logs_generator.go:76] 20 POST /api/v1/namespaces/kube-system/pods/s6l4 320\nI0810 00:36:15.282758 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/xkj 573\nI0810 00:36:15.482775 1 logs_generator.go:76] 22 GET /api/v1/namespaces/ns/pods/5jz 384\n" Aug 10 00:36:15.536: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1977 --since=24h' Aug 10 
00:36:15.645: INFO: stderr: "" Aug 10 00:36:15.645: INFO: stdout: "I0810 00:36:11.082613 1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/hvg 311\nI0810 00:36:11.282763 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/vd2 555\nI0810 00:36:11.482814 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/2jb 356\nI0810 00:36:11.682768 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/7w49 508\nI0810 00:36:11.882792 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/j9c2 557\nI0810 00:36:12.082795 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/nmc5 270\nI0810 00:36:12.282775 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/56d4 295\nI0810 00:36:12.482761 1 logs_generator.go:76] 7 POST /api/v1/namespaces/kube-system/pods/dfqm 274\nI0810 00:36:12.682821 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/5gt 384\nI0810 00:36:12.882723 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/4jz8 401\nI0810 00:36:13.082787 1 logs_generator.go:76] 10 POST /api/v1/namespaces/default/pods/jjvr 295\nI0810 00:36:13.282797 1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/625g 238\nI0810 00:36:13.482776 1 logs_generator.go:76] 12 POST /api/v1/namespaces/kube-system/pods/z4sf 553\nI0810 00:36:13.682781 1 logs_generator.go:76] 13 GET /api/v1/namespaces/default/pods/869 536\nI0810 00:36:13.882756 1 logs_generator.go:76] 14 GET /api/v1/namespaces/kube-system/pods/7d7 295\nI0810 00:36:14.082743 1 logs_generator.go:76] 15 GET /api/v1/namespaces/ns/pods/mrvl 355\nI0810 00:36:14.282766 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/7ww8 241\nI0810 00:36:14.482782 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/kube-system/pods/vkz 257\nI0810 00:36:14.682806 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/ns/pods/4xsc 447\nI0810 00:36:14.882797 1 logs_generator.go:76] 19 GET /api/v1/namespaces/default/pods/bmv 451\nI0810 00:36:15.082787 1 logs_generator.go:76] 20 POST 
/api/v1/namespaces/kube-system/pods/s6l4 320\nI0810 00:36:15.282758 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/xkj 573\nI0810 00:36:15.482775 1 logs_generator.go:76] 22 GET /api/v1/namespaces/ns/pods/5jz 384\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1421 Aug 10 00:36:15.645: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-1977' Aug 10 00:36:18.421: INFO: stderr: "" Aug 10 00:36:18.421: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:36:18.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1977" for this suite. • [SLOW TEST:10.221 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1411 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":303,"completed":203,"skipped":3235,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:36:18.429: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-7df54eb9-3567-41d8-9e1b-c365b4972a34 STEP: Creating a pod to test consume secrets Aug 10 00:36:18.499: INFO: Waiting up to 5m0s for pod "pod-secrets-3adbf802-4d94-4959-a436-6ddab6e1961e" in namespace "secrets-1132" to be "Succeeded or Failed" Aug 10 00:36:18.534: INFO: Pod "pod-secrets-3adbf802-4d94-4959-a436-6ddab6e1961e": Phase="Pending", Reason="", readiness=false. Elapsed: 35.123167ms Aug 10 00:36:20.538: INFO: Pod "pod-secrets-3adbf802-4d94-4959-a436-6ddab6e1961e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039132474s Aug 10 00:36:22.543: INFO: Pod "pod-secrets-3adbf802-4d94-4959-a436-6ddab6e1961e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044100859s STEP: Saw pod success Aug 10 00:36:22.543: INFO: Pod "pod-secrets-3adbf802-4d94-4959-a436-6ddab6e1961e" satisfied condition "Succeeded or Failed" Aug 10 00:36:22.546: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-3adbf802-4d94-4959-a436-6ddab6e1961e container secret-volume-test: STEP: delete the pod Aug 10 00:36:22.583: INFO: Waiting for pod pod-secrets-3adbf802-4d94-4959-a436-6ddab6e1961e to disappear Aug 10 00:36:22.606: INFO: Pod pod-secrets-3adbf802-4d94-4959-a436-6ddab6e1961e no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:36:22.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1132" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":204,"skipped":3247,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:36:22.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-f9d3751d-c520-452f-be51-0ad431ee9fee STEP: Creating a pod to test consume secrets Aug 10 00:36:22.675: INFO: Waiting up to 5m0s for pod "pod-secrets-2ac418ea-7c0d-4b80-b3da-d2ae7c2893af" in namespace "secrets-3359" to be "Succeeded or Failed" Aug 10 00:36:22.721: INFO: Pod "pod-secrets-2ac418ea-7c0d-4b80-b3da-d2ae7c2893af": Phase="Pending", Reason="", readiness=false. Elapsed: 45.366272ms Aug 10 00:36:24.725: INFO: Pod "pod-secrets-2ac418ea-7c0d-4b80-b3da-d2ae7c2893af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049575615s Aug 10 00:36:26.728: INFO: Pod "pod-secrets-2ac418ea-7c0d-4b80-b3da-d2ae7c2893af": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.053035528s STEP: Saw pod success Aug 10 00:36:26.728: INFO: Pod "pod-secrets-2ac418ea-7c0d-4b80-b3da-d2ae7c2893af" satisfied condition "Succeeded or Failed" Aug 10 00:36:26.731: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-2ac418ea-7c0d-4b80-b3da-d2ae7c2893af container secret-volume-test: STEP: delete the pod Aug 10 00:36:26.799: INFO: Waiting for pod pod-secrets-2ac418ea-7c0d-4b80-b3da-d2ae7c2893af to disappear Aug 10 00:36:26.913: INFO: Pod pod-secrets-2ac418ea-7c0d-4b80-b3da-d2ae7c2893af no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:36:26.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3359" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":205,"skipped":3284,"failed":0} SSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:36:26.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-9692 Aug 10 00:36:31.075: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-9692 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Aug 10 00:36:31.302: INFO: stderr: "I0810 00:36:31.193684 1617 log.go:181] (0xc0009513f0) (0xc000988320) Create stream\nI0810 00:36:31.193727 1617 log.go:181] (0xc0009513f0) (0xc000988320) Stream added, broadcasting: 1\nI0810 00:36:31.197529 1617 log.go:181] (0xc0009513f0) Reply frame received for 1\nI0810 00:36:31.197571 1617 log.go:181] (0xc0009513f0) (0xc000924b40) Create stream\nI0810 00:36:31.197583 1617 log.go:181] (0xc0009513f0) (0xc000924b40) Stream added, broadcasting: 3\nI0810 00:36:31.198194 1617 log.go:181] (0xc0009513f0) Reply frame received for 3\nI0810 00:36:31.198217 1617 log.go:181] (0xc0009513f0) (0xc00062e3c0) Create stream\nI0810 00:36:31.198224 1617 log.go:181] (0xc0009513f0) (0xc00062e3c0) Stream added, broadcasting: 5\nI0810 00:36:31.198815 1617 log.go:181] (0xc0009513f0) Reply frame received for 5\nI0810 00:36:31.287753 1617 log.go:181] (0xc0009513f0) Data frame received for 5\nI0810 00:36:31.287777 1617 log.go:181] (0xc00062e3c0) (5) Data frame handling\nI0810 00:36:31.287786 1617 log.go:181] (0xc00062e3c0) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI0810 00:36:31.294252 1617 log.go:181] (0xc0009513f0) Data frame received for 3\nI0810 00:36:31.294292 1617 log.go:181] (0xc000924b40) (3) Data frame handling\nI0810 00:36:31.294315 1617 log.go:181] (0xc000924b40) (3) Data frame sent\nI0810 00:36:31.294917 1617 log.go:181] (0xc0009513f0) Data frame received for 3\nI0810 00:36:31.294943 1617 log.go:181] (0xc000924b40) (3) Data frame handling\nI0810 00:36:31.294985 1617 log.go:181] (0xc0009513f0) Data frame received for 5\nI0810 00:36:31.295019 1617 log.go:181] (0xc00062e3c0) (5) Data frame handling\nI0810 00:36:31.296941 1617 log.go:181] (0xc0009513f0) Data frame received for 
1\nI0810 00:36:31.296969 1617 log.go:181] (0xc000988320) (1) Data frame handling\nI0810 00:36:31.296994 1617 log.go:181] (0xc000988320) (1) Data frame sent\nI0810 00:36:31.297016 1617 log.go:181] (0xc0009513f0) (0xc000988320) Stream removed, broadcasting: 1\nI0810 00:36:31.297111 1617 log.go:181] (0xc0009513f0) Go away received\nI0810 00:36:31.297485 1617 log.go:181] (0xc0009513f0) (0xc000988320) Stream removed, broadcasting: 1\nI0810 00:36:31.297509 1617 log.go:181] (0xc0009513f0) (0xc000924b40) Stream removed, broadcasting: 3\nI0810 00:36:31.297518 1617 log.go:181] (0xc0009513f0) (0xc00062e3c0) Stream removed, broadcasting: 5\n" Aug 10 00:36:31.302: INFO: stdout: "iptables" Aug 10 00:36:31.302: INFO: proxyMode: iptables Aug 10 00:36:31.308: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 10 00:36:31.315: INFO: Pod kube-proxy-mode-detector still exists Aug 10 00:36:33.315: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 10 00:36:33.332: INFO: Pod kube-proxy-mode-detector still exists Aug 10 00:36:35.315: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 10 00:36:35.319: INFO: Pod kube-proxy-mode-detector still exists Aug 10 00:36:37.315: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 10 00:36:37.320: INFO: Pod kube-proxy-mode-detector still exists Aug 10 00:36:39.315: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 10 00:36:39.320: INFO: Pod kube-proxy-mode-detector still exists Aug 10 00:36:41.315: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 10 00:36:41.320: INFO: Pod kube-proxy-mode-detector still exists Aug 10 00:36:43.315: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 10 00:36:43.320: INFO: Pod kube-proxy-mode-detector still exists Aug 10 00:36:45.315: INFO: Waiting for pod kube-proxy-mode-detector to disappear Aug 10 00:36:45.319: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace 
services-9692 STEP: creating replication controller affinity-nodeport-timeout in namespace services-9692 I0810 00:36:45.413779 8 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-9692, replica count: 3 I0810 00:36:48.464144 8 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0810 00:36:51.464359 8 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0810 00:36:54.464680 8 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 10 00:36:54.474: INFO: Creating new exec pod Aug 10 00:36:59.499: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-9692 execpod-affinitylpzw2 -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' Aug 10 00:36:59.734: INFO: stderr: "I0810 00:36:59.633315 1635 log.go:181] (0xc000d4cfd0) (0xc000e92820) Create stream\nI0810 00:36:59.633380 1635 log.go:181] (0xc000d4cfd0) (0xc000e92820) Stream added, broadcasting: 1\nI0810 00:36:59.638015 1635 log.go:181] (0xc000d4cfd0) Reply frame received for 1\nI0810 00:36:59.638053 1635 log.go:181] (0xc000d4cfd0) (0xc000a89220) Create stream\nI0810 00:36:59.638064 1635 log.go:181] (0xc000d4cfd0) (0xc000a89220) Stream added, broadcasting: 3\nI0810 00:36:59.638942 1635 log.go:181] (0xc000d4cfd0) Reply frame received for 3\nI0810 00:36:59.638970 1635 log.go:181] (0xc000d4cfd0) (0xc000819360) Create stream\nI0810 00:36:59.638979 1635 log.go:181] (0xc000d4cfd0) (0xc000819360) Stream added, broadcasting: 5\nI0810 00:36:59.640041 1635 log.go:181] (0xc000d4cfd0) Reply frame received for 5\nI0810 00:36:59.724900 1635 log.go:181] (0xc000d4cfd0) Data frame 
received for 5\nI0810 00:36:59.724942 1635 log.go:181] (0xc000819360) (5) Data frame handling\nI0810 00:36:59.724981 1635 log.go:181] (0xc000819360) (5) Data frame sent\nI0810 00:36:59.724999 1635 log.go:181] (0xc000d4cfd0) Data frame received for 5\nI0810 00:36:59.725015 1635 log.go:181] (0xc000819360) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI0810 00:36:59.725119 1635 log.go:181] (0xc000819360) (5) Data frame sent\nI0810 00:36:59.725168 1635 log.go:181] (0xc000d4cfd0) Data frame received for 5\nI0810 00:36:59.725181 1635 log.go:181] (0xc000819360) (5) Data frame handling\nI0810 00:36:59.725230 1635 log.go:181] (0xc000d4cfd0) Data frame received for 3\nI0810 00:36:59.725263 1635 log.go:181] (0xc000a89220) (3) Data frame handling\nI0810 00:36:59.727193 1635 log.go:181] (0xc000d4cfd0) Data frame received for 1\nI0810 00:36:59.727221 1635 log.go:181] (0xc000e92820) (1) Data frame handling\nI0810 00:36:59.727240 1635 log.go:181] (0xc000e92820) (1) Data frame sent\nI0810 00:36:59.727261 1635 log.go:181] (0xc000d4cfd0) (0xc000e92820) Stream removed, broadcasting: 1\nI0810 00:36:59.727361 1635 log.go:181] (0xc000d4cfd0) Go away received\nI0810 00:36:59.727774 1635 log.go:181] (0xc000d4cfd0) (0xc000e92820) Stream removed, broadcasting: 1\nI0810 00:36:59.727795 1635 log.go:181] (0xc000d4cfd0) (0xc000a89220) Stream removed, broadcasting: 3\nI0810 00:36:59.727806 1635 log.go:181] (0xc000d4cfd0) (0xc000819360) Stream removed, broadcasting: 5\n" Aug 10 00:36:59.734: INFO: stdout: "" Aug 10 00:36:59.735: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-9692 execpod-affinitylpzw2 -- /bin/sh -x -c nc -zv -t -w 2 10.100.206.109 80' Aug 10 00:36:59.924: INFO: stderr: "I0810 00:36:59.860305 1653 log.go:181] (0xc00003a2c0) (0xc0009a4320) Create stream\nI0810 00:36:59.860353 1653 log.go:181] 
(0xc00003a2c0) (0xc0009a4320) Stream added, broadcasting: 1\nI0810 00:36:59.863193 1653 log.go:181] (0xc00003a2c0) Reply frame received for 1\nI0810 00:36:59.863241 1653 log.go:181] (0xc00003a2c0) (0xc000a9b040) Create stream\nI0810 00:36:59.863257 1653 log.go:181] (0xc00003a2c0) (0xc000a9b040) Stream added, broadcasting: 3\nI0810 00:36:59.864162 1653 log.go:181] (0xc00003a2c0) Reply frame received for 3\nI0810 00:36:59.864205 1653 log.go:181] (0xc00003a2c0) (0xc000784a00) Create stream\nI0810 00:36:59.864223 1653 log.go:181] (0xc00003a2c0) (0xc000784a00) Stream added, broadcasting: 5\nI0810 00:36:59.865303 1653 log.go:181] (0xc00003a2c0) Reply frame received for 5\nI0810 00:36:59.916143 1653 log.go:181] (0xc00003a2c0) Data frame received for 3\nI0810 00:36:59.916181 1653 log.go:181] (0xc000a9b040) (3) Data frame handling\nI0810 00:36:59.916202 1653 log.go:181] (0xc00003a2c0) Data frame received for 5\nI0810 00:36:59.916211 1653 log.go:181] (0xc000784a00) (5) Data frame handling\nI0810 00:36:59.916227 1653 log.go:181] (0xc000784a00) (5) Data frame sent\nI0810 00:36:59.916240 1653 log.go:181] (0xc00003a2c0) Data frame received for 5\nI0810 00:36:59.916249 1653 log.go:181] (0xc000784a00) (5) Data frame handling\n+ nc -zv -t -w 2 10.100.206.109 80\nConnection to 10.100.206.109 80 port [tcp/http] succeeded!\nI0810 00:36:59.917945 1653 log.go:181] (0xc00003a2c0) Data frame received for 1\nI0810 00:36:59.917985 1653 log.go:181] (0xc0009a4320) (1) Data frame handling\nI0810 00:36:59.918015 1653 log.go:181] (0xc0009a4320) (1) Data frame sent\nI0810 00:36:59.918054 1653 log.go:181] (0xc00003a2c0) (0xc0009a4320) Stream removed, broadcasting: 1\nI0810 00:36:59.918117 1653 log.go:181] (0xc00003a2c0) Go away received\nI0810 00:36:59.918608 1653 log.go:181] (0xc00003a2c0) (0xc0009a4320) Stream removed, broadcasting: 1\nI0810 00:36:59.918629 1653 log.go:181] (0xc00003a2c0) (0xc000a9b040) Stream removed, broadcasting: 3\nI0810 00:36:59.918641 1653 log.go:181] (0xc00003a2c0) 
(0xc000784a00) Stream removed, broadcasting: 5\n" Aug 10 00:36:59.924: INFO: stdout: "" Aug 10 00:36:59.924: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-9692 execpod-affinitylpzw2 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 31438' Aug 10 00:37:00.128: INFO: stderr: "I0810 00:37:00.059480 1671 log.go:181] (0xc00018c4d0) (0xc000b479a0) Create stream\nI0810 00:37:00.059531 1671 log.go:181] (0xc00018c4d0) (0xc000b479a0) Stream added, broadcasting: 1\nI0810 00:37:00.061242 1671 log.go:181] (0xc00018c4d0) Reply frame received for 1\nI0810 00:37:00.061270 1671 log.go:181] (0xc00018c4d0) (0xc000b3d0e0) Create stream\nI0810 00:37:00.061280 1671 log.go:181] (0xc00018c4d0) (0xc000b3d0e0) Stream added, broadcasting: 3\nI0810 00:37:00.062342 1671 log.go:181] (0xc00018c4d0) Reply frame received for 3\nI0810 00:37:00.062383 1671 log.go:181] (0xc00018c4d0) (0xc000a188c0) Create stream\nI0810 00:37:00.062399 1671 log.go:181] (0xc00018c4d0) (0xc000a188c0) Stream added, broadcasting: 5\nI0810 00:37:00.063303 1671 log.go:181] (0xc00018c4d0) Reply frame received for 5\nI0810 00:37:00.121264 1671 log.go:181] (0xc00018c4d0) Data frame received for 5\nI0810 00:37:00.121315 1671 log.go:181] (0xc000a188c0) (5) Data frame handling\nI0810 00:37:00.121353 1671 log.go:181] (0xc000a188c0) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.14 31438\nConnection to 172.18.0.14 31438 port [tcp/31438] succeeded!\nI0810 00:37:00.121377 1671 log.go:181] (0xc00018c4d0) Data frame received for 3\nI0810 00:37:00.121403 1671 log.go:181] (0xc000b3d0e0) (3) Data frame handling\nI0810 00:37:00.121478 1671 log.go:181] (0xc00018c4d0) Data frame received for 5\nI0810 00:37:00.121496 1671 log.go:181] (0xc000a188c0) (5) Data frame handling\nI0810 00:37:00.123043 1671 log.go:181] (0xc00018c4d0) Data frame received for 1\nI0810 00:37:00.123066 1671 log.go:181] (0xc000b479a0) (1) Data frame handling\nI0810 00:37:00.123080 1671 
log.go:181] (0xc000b479a0) (1) Data frame sent\nI0810 00:37:00.123096 1671 log.go:181] (0xc00018c4d0) (0xc000b479a0) Stream removed, broadcasting: 1\nI0810 00:37:00.123111 1671 log.go:181] (0xc00018c4d0) Go away received\nI0810 00:37:00.123486 1671 log.go:181] (0xc00018c4d0) (0xc000b479a0) Stream removed, broadcasting: 1\nI0810 00:37:00.123511 1671 log.go:181] (0xc00018c4d0) (0xc000b3d0e0) Stream removed, broadcasting: 3\nI0810 00:37:00.123540 1671 log.go:181] (0xc00018c4d0) (0xc000a188c0) Stream removed, broadcasting: 5\n" Aug 10 00:37:00.128: INFO: stdout: "" Aug 10 00:37:00.129: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-9692 execpod-affinitylpzw2 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 31438' Aug 10 00:37:00.325: INFO: stderr: "I0810 00:37:00.255546 1689 log.go:181] (0xc000e07080) (0xc0009a2280) Create stream\nI0810 00:37:00.255597 1689 log.go:181] (0xc000e07080) (0xc0009a2280) Stream added, broadcasting: 1\nI0810 00:37:00.261646 1689 log.go:181] (0xc000e07080) Reply frame received for 1\nI0810 00:37:00.261695 1689 log.go:181] (0xc000e07080) (0xc000a96320) Create stream\nI0810 00:37:00.261712 1689 log.go:181] (0xc000e07080) (0xc000a96320) Stream added, broadcasting: 3\nI0810 00:37:00.262744 1689 log.go:181] (0xc000e07080) Reply frame received for 3\nI0810 00:37:00.262799 1689 log.go:181] (0xc000e07080) (0xc0005c0a00) Create stream\nI0810 00:37:00.262830 1689 log.go:181] (0xc000e07080) (0xc0005c0a00) Stream added, broadcasting: 5\nI0810 00:37:00.263632 1689 log.go:181] (0xc000e07080) Reply frame received for 5\nI0810 00:37:00.318905 1689 log.go:181] (0xc000e07080) Data frame received for 3\nI0810 00:37:00.318942 1689 log.go:181] (0xc000a96320) (3) Data frame handling\nI0810 00:37:00.318964 1689 log.go:181] (0xc000e07080) Data frame received for 5\nI0810 00:37:00.318971 1689 log.go:181] (0xc0005c0a00) (5) Data frame handling\nI0810 00:37:00.318980 1689 log.go:181] 
(0xc0005c0a00) (5) Data frame sent\nI0810 00:37:00.318988 1689 log.go:181] (0xc000e07080) Data frame received for 5\nI0810 00:37:00.318996 1689 log.go:181] (0xc0005c0a00) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.12 31438\nConnection to 172.18.0.12 31438 port [tcp/31438] succeeded!\nI0810 00:37:00.320305 1689 log.go:181] (0xc000e07080) Data frame received for 1\nI0810 00:37:00.320342 1689 log.go:181] (0xc0009a2280) (1) Data frame handling\nI0810 00:37:00.320369 1689 log.go:181] (0xc0009a2280) (1) Data frame sent\nI0810 00:37:00.320396 1689 log.go:181] (0xc000e07080) (0xc0009a2280) Stream removed, broadcasting: 1\nI0810 00:37:00.320423 1689 log.go:181] (0xc000e07080) Go away received\nI0810 00:37:00.320803 1689 log.go:181] (0xc000e07080) (0xc0009a2280) Stream removed, broadcasting: 1\nI0810 00:37:00.320824 1689 log.go:181] (0xc000e07080) (0xc000a96320) Stream removed, broadcasting: 3\nI0810 00:37:00.320832 1689 log.go:181] (0xc000e07080) (0xc0005c0a00) Stream removed, broadcasting: 5\n" Aug 10 00:37:00.325: INFO: stdout: "" Aug 10 00:37:00.326: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-9692 execpod-affinitylpzw2 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.14:31438/ ; done' Aug 10 00:37:00.633: INFO: stderr: "I0810 00:37:00.446278 1707 log.go:181] (0xc000657290) (0xc000b797c0) Create stream\nI0810 00:37:00.446342 1707 log.go:181] (0xc000657290) (0xc000b797c0) Stream added, broadcasting: 1\nI0810 00:37:00.449603 1707 log.go:181] (0xc000657290) Reply frame received for 1\nI0810 00:37:00.449642 1707 log.go:181] (0xc000657290) (0xc000d26140) Create stream\nI0810 00:37:00.449652 1707 log.go:181] (0xc000657290) (0xc000d26140) Stream added, broadcasting: 3\nI0810 00:37:00.450441 1707 log.go:181] (0xc000657290) Reply frame received for 3\nI0810 00:37:00.450480 1707 log.go:181] (0xc000657290) (0xc000d261e0) Create stream\nI0810 
00:37:00.450498 1707 log.go:181] (0xc000657290) (0xc000d261e0) Stream added, broadcasting: 5\nI0810 00:37:00.451271 1707 log.go:181] (0xc000657290) Reply frame received for 5\nI0810 00:37:00.525365 1707 log.go:181] (0xc000657290) Data frame received for 5\nI0810 00:37:00.525409 1707 log.go:181] (0xc000d261e0) (5) Data frame handling\nI0810 00:37:00.525425 1707 log.go:181] (0xc000d261e0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31438/\nI0810 00:37:00.525447 1707 log.go:181] (0xc000657290) Data frame received for 3\nI0810 00:37:00.525459 1707 log.go:181] (0xc000d26140) (3) Data frame handling\nI0810 00:37:00.525472 1707 log.go:181] (0xc000d26140) (3) Data frame sent\nI0810 00:37:00.531970 1707 log.go:181] (0xc000657290) Data frame received for 3\nI0810 00:37:00.531994 1707 log.go:181] (0xc000d26140) (3) Data frame handling\nI0810 00:37:00.532013 1707 log.go:181] (0xc000d26140) (3) Data frame sent\nI0810 00:37:00.532688 1707 log.go:181] (0xc000657290) Data frame received for 3\nI0810 00:37:00.532711 1707 log.go:181] (0xc000d26140) (3) Data frame handling\nI0810 00:37:00.532801 1707 log.go:181] (0xc000d26140) (3) Data frame sent\nI0810 00:37:00.532817 1707 log.go:181] (0xc000657290) Data frame received for 5\nI0810 00:37:00.532826 1707 log.go:181] (0xc000d261e0) (5) Data frame handling\nI0810 00:37:00.532833 1707 log.go:181] (0xc000d261e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31438/\nI0810 00:37:00.538135 1707 log.go:181] (0xc000657290) Data frame received for 3\nI0810 00:37:00.538150 1707 log.go:181] (0xc000d26140) (3) Data frame handling\nI0810 00:37:00.538168 1707 log.go:181] (0xc000d26140) (3) Data frame sent\nI0810 00:37:00.539126 1707 log.go:181] (0xc000657290) Data frame received for 3\nI0810 00:37:00.539151 1707 log.go:181] (0xc000d26140) (3) Data frame handling\nI0810 00:37:00.539162 1707 log.go:181] (0xc000d26140) (3) Data frame sent\nI0810 00:37:00.539179 1707 
log.go:181] (0xc000657290) Data frame received for 5\nI0810 00:37:00.539188 1707 log.go:181] (0xc000d261e0) (5) Data frame handling\nI0810 00:37:00.539198 1707 log.go:181] (0xc000d261e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31438/\nI0810 00:37:00.546698 1707 log.go:181] (0xc000657290) Data frame received for 3\nI0810 00:37:00.546718 1707 log.go:181] (0xc000d26140) (3) Data frame handling\nI0810 00:37:00.546735 1707 log.go:181] (0xc000d26140) (3) Data frame sent\nI0810 00:37:00.547387 1707 log.go:181] (0xc000657290) Data frame received for 5\nI0810 00:37:00.547407 1707 log.go:181] (0xc000d261e0) (5) Data frame handling\nI0810 00:37:00.547430 1707 log.go:181] (0xc000d261e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31438/\nI0810 00:37:00.547464 1707 log.go:181] (0xc000657290) Data frame received for 3\nI0810 00:37:00.547478 1707 log.go:181] (0xc000d26140) (3) Data frame handling\nI0810 00:37:00.547487 1707 log.go:181] (0xc000d26140) (3) Data frame sent\nI0810 00:37:00.553311 1707 log.go:181] (0xc000657290) Data frame received for 3\nI0810 00:37:00.553344 1707 log.go:181] (0xc000d26140) (3) Data frame handling\nI0810 00:37:00.553370 1707 log.go:181] (0xc000d26140) (3) Data frame sent\nI0810 00:37:00.554378 1707 log.go:181] (0xc000657290) Data frame received for 3\nI0810 00:37:00.554407 1707 log.go:181] (0xc000d26140) (3) Data frame handling\nI0810 00:37:00.554420 1707 log.go:181] (0xc000d26140) (3) Data frame sent\nI0810 00:37:00.554448 1707 log.go:181] (0xc000657290) Data frame received for 5\nI0810 00:37:00.554484 1707 log.go:181] (0xc000d261e0) (5) Data frame handling\nI0810 00:37:00.554533 1707 log.go:181] (0xc000d261e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31438/\nI0810 00:37:00.562030 1707 log.go:181] (0xc000657290) Data frame received for 3\nI0810 00:37:00.562064 1707 log.go:181] (0xc000d26140) (3) Data frame handling\nI0810 
00:37:00.562105 1707 log.go:181] (0xc000d26140) (3) Data frame sent\nI0810 00:37:00.563621 1707 log.go:181] (0xc000657290) Data frame received for 5\nI0810 00:37:00.563660 1707 log.go:181] (0xc000d261e0) (5) Data frame handling\nI0810 00:37:00.563678 1707 log.go:181] (0xc000d261e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31438/\nI0810 00:37:00.563695 1707 log.go:181] (0xc000657290) Data frame received for 3\nI0810 00:37:00.563737 1707 log.go:181] (0xc000d26140) (3) Data frame handling\nI0810 00:37:00.563765 1707 log.go:181] (0xc000d26140) (3) Data frame sent\nI0810 00:37:00.568227 1707 log.go:181] (0xc000657290) Data frame received for 3\nI0810 00:37:00.568261 1707 log.go:181] (0xc000d26140) (3) Data frame handling\nI0810 00:37:00.568291 1707 log.go:181] (0xc000d26140) (3) Data frame sent\nI0810 00:37:00.568637 1707 log.go:181] (0xc000657290) Data frame received for 5\nI0810 00:37:00.568663 1707 log.go:181] (0xc000d261e0) (5) Data frame handling\nI0810 00:37:00.568696 1707 log.go:181] (0xc000d261e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31438/\nI0810 00:37:00.568838 1707 log.go:181] (0xc000657290) Data frame received for 3\nI0810 00:37:00.568861 1707 log.go:181] (0xc000d26140) (3) Data frame handling\nI0810 00:37:00.568880 1707 log.go:181] (0xc000d26140) (3) Data frame sent\nI0810 00:37:00.573270 1707 log.go:181] (0xc000657290) Data frame received for 3\nI0810 00:37:00.573285 1707 log.go:181] (0xc000d26140) (3) Data frame handling\nI0810 00:37:00.573294 1707 log.go:181] (0xc000d26140) (3) Data frame sent\nI0810 00:37:00.573758 1707 log.go:181] (0xc000657290) Data frame received for 3\nI0810 00:37:00.573803 1707 log.go:181] (0xc000657290) Data frame received for 5\nI0810 00:37:00.573832 1707 log.go:181] (0xc000d261e0) (5) Data frame handling\nI0810 00:37:00.573851 1707 log.go:181] (0xc000d261e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://172.18.0.14:31438/\nI0810 00:37:00.573871 1707 log.go:181] (0xc000d26140) (3) Data frame handling\nI0810 00:37:00.573887 1707 log.go:181] (0xc000d26140) (3) Data frame sent\nI0810 00:37:00.581083 1707 log.go:181] (0xc000657290) Data frame received for 3\nI0810 00:37:00.581100 1707 log.go:181] (0xc000d26140) (3) Data frame handling\nI0810 00:37:00.581109 1707 log.go:181] (0xc000d26140) (3) Data frame sent\nI0810 00:37:00.581948 1707 log.go:181] (0xc000657290) Data frame received for 3\nI0810 00:37:00.581960 1707 log.go:181] (0xc000d26140) (3) Data frame handling\nI0810 00:37:00.581967 1707 log.go:181] (0xc000d26140) (3) Data frame sent\nI0810 00:37:00.581986 1707 log.go:181] (0xc000657290) Data frame received for 5\nI0810 00:37:00.582012 1707 log.go:181] (0xc000d261e0) (5) Data frame handling\nI0810 00:37:00.582028 1707 log.go:181] (0xc000d261e0) (5) Data frame sent\nI0810 00:37:00.582039 1707 log.go:181] (0xc000657290) Data frame received for 5\nI0810 00:37:00.582047 1707 log.go:181] (0xc000d261e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31438/\nI0810 00:37:00.582075 1707 log.go:181] (0xc000d261e0) (5) Data frame sent\nI0810 00:37:00.587113 1707 log.go:181] (0xc000657290) Data frame received for 3\nI0810 00:37:00.587128 1707 log.go:181] (0xc000d26140) (3) Data frame handling\nI0810 00:37:00.587136 1707 log.go:181] (0xc000d26140) (3) Data frame sent\nI0810 00:37:00.587550 1707 log.go:181] (0xc000657290) Data frame received for 3\nI0810 00:37:00.587580 1707 log.go:181] (0xc000d26140) (3) Data frame handling\nI0810 00:37:00.587593 1707 log.go:181] (0xc000d26140) (3) Data frame sent\nI0810 00:37:00.587610 1707 log.go:181] (0xc000657290) Data frame received for 5\nI0810 00:37:00.587620 1707 log.go:181] (0xc000d261e0) (5) Data frame handling\nI0810 00:37:00.587629 1707 log.go:181] (0xc000d261e0) (5) Data frame sent\nI0810 00:37:00.587644 1707 log.go:181] (0xc000657290) Data frame received for 5\nI0810 00:37:00.587661 
1707 log.go:181] (0xc000d261e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31438/\nI0810 00:37:00.587680 1707 log.go:181] (0xc000d261e0) (5) Data frame sent\nI0810 00:37:00.590897 1707 log.go:181] (0xc000657290) Data frame received for 3\nI0810 00:37:00.590909 1707 log.go:181] (0xc000d26140) (3) Data frame handling\nI0810 00:37:00.590916 1707 log.go:181] (0xc000d26140) (3) Data frame sent\nI0810 00:37:00.591758 1707 log.go:181] (0xc000657290) Data frame received for 3\nI0810 00:37:00.591768 1707 log.go:181] (0xc000d26140) (3) Data frame handling\nI0810 00:37:00.591780 1707 log.go:181] (0xc000d26140) (3) Data frame sent\nI0810 00:37:00.591792 1707 log.go:181] (0xc000657290) Data frame received for 5\nI0810 00:37:00.591809 1707 log.go:181] (0xc000d261e0) (5) Data frame handling\nI0810 00:37:00.591814 1707 log.go:181] (0xc000d261e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31438/\nI0810 00:37:00.595210 1707 log.go:181] (0xc000657290) Data frame received for 3\nI0810 00:37:00.595240 1707 log.go:181] (0xc000d26140) (3) Data frame handling\nI0810 00:37:00.595259 1707 log.go:181] (0xc000d26140) (3) Data frame sent\nI0810 00:37:00.596290 1707 log.go:181] (0xc000657290) Data frame received for 3\nI0810 00:37:00.596309 1707 log.go:181] (0xc000d26140) (3) Data frame handling\nI0810 00:37:00.596320 1707 log.go:181] (0xc000d26140) (3) Data frame sent\nI0810 00:37:00.596356 1707 log.go:181] (0xc000657290) Data frame received for 5\nI0810 00:37:00.596376 1707 log.go:181] (0xc000d261e0) (5) Data frame handling\nI0810 00:37:00.596386 1707 log.go:181] (0xc000d261e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31438/\nI0810 00:37:00.602572 1707 log.go:181] (0xc000657290) Data frame received for 3\nI0810 00:37:00.602599 1707 log.go:181] (0xc000d26140) (3) Data frame handling\nI0810 00:37:00.602626 1707 log.go:181] (0xc000d26140) (3) Data frame sent\nI0810 
00:37:00.603204 1707 log.go:181] (0xc000657290) Data frame received for 3\nI0810 00:37:00.603218 1707 log.go:181] (0xc000d26140) (3) Data frame handling\nI0810 00:37:00.603224 1707 log.go:181] (0xc000d26140) (3) Data frame sent\nI0810 00:37:00.603242 1707 log.go:181] (0xc000657290) Data frame received for 5\nI0810 00:37:00.603258 1707 log.go:181] (0xc000d261e0) (5) Data frame handling\nI0810 00:37:00.603275 1707 log.go:181] (0xc000d261e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31438/\nI0810 00:37:00.607601 1707 log.go:181] (0xc000657290) Data frame received for 3\nI0810 00:37:00.607632 1707 log.go:181] (0xc000d26140) (3) Data frame handling\nI0810 00:37:00.607660 1707 log.go:181] (0xc000d26140) (3) Data frame sent\nI0810 00:37:00.608609 1707 log.go:181] (0xc000657290) Data frame received for 5\nI0810 00:37:00.608622 1707 log.go:181] (0xc000d261e0) (5) Data frame handling\nI0810 00:37:00.608629 1707 log.go:181] (0xc000d261e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31438/\nI0810 00:37:00.608647 1707 log.go:181] (0xc000657290) Data frame received for 3\nI0810 00:37:00.608666 1707 log.go:181] (0xc000d26140) (3) Data frame handling\nI0810 00:37:00.608684 1707 log.go:181] (0xc000d26140) (3) Data frame sent\nI0810 00:37:00.612291 1707 log.go:181] (0xc000657290) Data frame received for 3\nI0810 00:37:00.612315 1707 log.go:181] (0xc000d26140) (3) Data frame handling\nI0810 00:37:00.612334 1707 log.go:181] (0xc000d26140) (3) Data frame sent\nI0810 00:37:00.612942 1707 log.go:181] (0xc000657290) Data frame received for 3\nI0810 00:37:00.612965 1707 log.go:181] (0xc000d26140) (3) Data frame handling\nI0810 00:37:00.612977 1707 log.go:181] (0xc000d26140) (3) Data frame sent\nI0810 00:37:00.613051 1707 log.go:181] (0xc000657290) Data frame received for 5\nI0810 00:37:00.613069 1707 log.go:181] (0xc000d261e0) (5) Data frame handling\nI0810 00:37:00.613084 1707 log.go:181] (0xc000d261e0) (5) Data 
frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31438/\nI0810 00:37:00.617535 1707 log.go:181] (0xc000657290) Data frame received for 3\nI0810 00:37:00.617548 1707 log.go:181] (0xc000d26140) (3) Data frame handling\nI0810 00:37:00.617554 1707 log.go:181] (0xc000d26140) (3) Data frame sent\nI0810 00:37:00.618455 1707 log.go:181] (0xc000657290) Data frame received for 5\nI0810 00:37:00.618467 1707 log.go:181] (0xc000d261e0) (5) Data frame handling\nI0810 00:37:00.618473 1707 log.go:181] (0xc000d261e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31438/\nI0810 00:37:00.618554 1707 log.go:181] (0xc000657290) Data frame received for 3\nI0810 00:37:00.618597 1707 log.go:181] (0xc000d26140) (3) Data frame handling\nI0810 00:37:00.618639 1707 log.go:181] (0xc000d26140) (3) Data frame sent\nI0810 00:37:00.624454 1707 log.go:181] (0xc000657290) Data frame received for 3\nI0810 00:37:00.624469 1707 log.go:181] (0xc000d26140) (3) Data frame handling\nI0810 00:37:00.624482 1707 log.go:181] (0xc000d26140) (3) Data frame sent\nI0810 00:37:00.625775 1707 log.go:181] (0xc000657290) Data frame received for 5\nI0810 00:37:00.625790 1707 log.go:181] (0xc000d261e0) (5) Data frame handling\nI0810 00:37:00.625806 1707 log.go:181] (0xc000657290) Data frame received for 3\nI0810 00:37:00.625825 1707 log.go:181] (0xc000d26140) (3) Data frame handling\nI0810 00:37:00.627908 1707 log.go:181] (0xc000657290) Data frame received for 1\nI0810 00:37:00.627924 1707 log.go:181] (0xc000b797c0) (1) Data frame handling\nI0810 00:37:00.627943 1707 log.go:181] (0xc000b797c0) (1) Data frame sent\nI0810 00:37:00.627957 1707 log.go:181] (0xc000657290) (0xc000b797c0) Stream removed, broadcasting: 1\nI0810 00:37:00.627977 1707 log.go:181] (0xc000657290) Go away received\nI0810 00:37:00.628439 1707 log.go:181] (0xc000657290) (0xc000b797c0) Stream removed, broadcasting: 1\nI0810 00:37:00.628457 1707 log.go:181] (0xc000657290) (0xc000d26140) Stream 
removed, broadcasting: 3\nI0810 00:37:00.628467 1707 log.go:181] (0xc000657290) (0xc000d261e0) Stream removed, broadcasting: 5\n" Aug 10 00:37:00.634: INFO: stdout: "\naffinity-nodeport-timeout-lb5xc\naffinity-nodeport-timeout-lb5xc\naffinity-nodeport-timeout-lb5xc\naffinity-nodeport-timeout-lb5xc\naffinity-nodeport-timeout-lb5xc\naffinity-nodeport-timeout-lb5xc\naffinity-nodeport-timeout-lb5xc\naffinity-nodeport-timeout-lb5xc\naffinity-nodeport-timeout-lb5xc\naffinity-nodeport-timeout-lb5xc\naffinity-nodeport-timeout-lb5xc\naffinity-nodeport-timeout-lb5xc\naffinity-nodeport-timeout-lb5xc\naffinity-nodeport-timeout-lb5xc\naffinity-nodeport-timeout-lb5xc\naffinity-nodeport-timeout-lb5xc" Aug 10 00:37:00.634: INFO: Received response from host: affinity-nodeport-timeout-lb5xc Aug 10 00:37:00.634: INFO: Received response from host: affinity-nodeport-timeout-lb5xc Aug 10 00:37:00.634: INFO: Received response from host: affinity-nodeport-timeout-lb5xc Aug 10 00:37:00.634: INFO: Received response from host: affinity-nodeport-timeout-lb5xc Aug 10 00:37:00.634: INFO: Received response from host: affinity-nodeport-timeout-lb5xc Aug 10 00:37:00.634: INFO: Received response from host: affinity-nodeport-timeout-lb5xc Aug 10 00:37:00.634: INFO: Received response from host: affinity-nodeport-timeout-lb5xc Aug 10 00:37:00.634: INFO: Received response from host: affinity-nodeport-timeout-lb5xc Aug 10 00:37:00.634: INFO: Received response from host: affinity-nodeport-timeout-lb5xc Aug 10 00:37:00.634: INFO: Received response from host: affinity-nodeport-timeout-lb5xc Aug 10 00:37:00.634: INFO: Received response from host: affinity-nodeport-timeout-lb5xc Aug 10 00:37:00.634: INFO: Received response from host: affinity-nodeport-timeout-lb5xc Aug 10 00:37:00.634: INFO: Received response from host: affinity-nodeport-timeout-lb5xc Aug 10 00:37:00.634: INFO: Received response from host: affinity-nodeport-timeout-lb5xc Aug 10 00:37:00.634: INFO: Received response from host: 
affinity-nodeport-timeout-lb5xc Aug 10 00:37:00.634: INFO: Received response from host: affinity-nodeport-timeout-lb5xc Aug 10 00:37:00.634: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-9692 execpod-affinitylpzw2 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.14:31438/' Aug 10 00:37:00.881: INFO: stderr: "I0810 00:37:00.789798 1724 log.go:181] (0xc000141810) (0xc000b57ea0) Create stream\nI0810 00:37:00.789864 1724 log.go:181] (0xc000141810) (0xc000b57ea0) Stream added, broadcasting: 1\nI0810 00:37:00.791750 1724 log.go:181] (0xc000141810) Reply frame received for 1\nI0810 00:37:00.791788 1724 log.go:181] (0xc000141810) (0xc000b32320) Create stream\nI0810 00:37:00.791803 1724 log.go:181] (0xc000141810) (0xc000b32320) Stream added, broadcasting: 3\nI0810 00:37:00.792876 1724 log.go:181] (0xc000141810) Reply frame received for 3\nI0810 00:37:00.792918 1724 log.go:181] (0xc000141810) (0xc000b3cdc0) Create stream\nI0810 00:37:00.792931 1724 log.go:181] (0xc000141810) (0xc000b3cdc0) Stream added, broadcasting: 5\nI0810 00:37:00.793869 1724 log.go:181] (0xc000141810) Reply frame received for 5\nI0810 00:37:00.867419 1724 log.go:181] (0xc000141810) Data frame received for 5\nI0810 00:37:00.867441 1724 log.go:181] (0xc000b3cdc0) (5) Data frame handling\nI0810 00:37:00.867453 1724 log.go:181] (0xc000b3cdc0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31438/\nI0810 00:37:00.872031 1724 log.go:181] (0xc000141810) Data frame received for 3\nI0810 00:37:00.872052 1724 log.go:181] (0xc000b32320) (3) Data frame handling\nI0810 00:37:00.872064 1724 log.go:181] (0xc000b32320) (3) Data frame sent\nI0810 00:37:00.873041 1724 log.go:181] (0xc000141810) Data frame received for 3\nI0810 00:37:00.873065 1724 log.go:181] (0xc000b32320) (3) Data frame handling\nI0810 00:37:00.873284 1724 log.go:181] (0xc000141810) Data frame received for 5\nI0810 
00:37:00.873298 1724 log.go:181] (0xc000b3cdc0) (5) Data frame handling\nI0810 00:37:00.874941 1724 log.go:181] (0xc000141810) Data frame received for 1\nI0810 00:37:00.874959 1724 log.go:181] (0xc000b57ea0) (1) Data frame handling\nI0810 00:37:00.874969 1724 log.go:181] (0xc000b57ea0) (1) Data frame sent\nI0810 00:37:00.874978 1724 log.go:181] (0xc000141810) (0xc000b57ea0) Stream removed, broadcasting: 1\nI0810 00:37:00.875003 1724 log.go:181] (0xc000141810) Go away received\nI0810 00:37:00.875311 1724 log.go:181] (0xc000141810) (0xc000b57ea0) Stream removed, broadcasting: 1\nI0810 00:37:00.875325 1724 log.go:181] (0xc000141810) (0xc000b32320) Stream removed, broadcasting: 3\nI0810 00:37:00.875331 1724 log.go:181] (0xc000141810) (0xc000b3cdc0) Stream removed, broadcasting: 5\n" Aug 10 00:37:00.881: INFO: stdout: "affinity-nodeport-timeout-lb5xc" Aug 10 00:37:15.881: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-9692 execpod-affinitylpzw2 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.14:31438/' Aug 10 00:37:16.135: INFO: stderr: "I0810 00:37:16.017889 1742 log.go:181] (0xc00015cdc0) (0xc000d76460) Create stream\nI0810 00:37:16.017943 1742 log.go:181] (0xc00015cdc0) (0xc000d76460) Stream added, broadcasting: 1\nI0810 00:37:16.022896 1742 log.go:181] (0xc00015cdc0) Reply frame received for 1\nI0810 00:37:16.022953 1742 log.go:181] (0xc00015cdc0) (0xc0008be460) Create stream\nI0810 00:37:16.022974 1742 log.go:181] (0xc00015cdc0) (0xc0008be460) Stream added, broadcasting: 3\nI0810 00:37:16.024086 1742 log.go:181] (0xc00015cdc0) Reply frame received for 3\nI0810 00:37:16.024122 1742 log.go:181] (0xc00015cdc0) (0xc00044ab40) Create stream\nI0810 00:37:16.024130 1742 log.go:181] (0xc00015cdc0) (0xc00044ab40) Stream added, broadcasting: 5\nI0810 00:37:16.025251 1742 log.go:181] (0xc00015cdc0) Reply frame received for 5\nI0810 00:37:16.125424 1742 log.go:181] 
(0xc00015cdc0) Data frame received for 5\nI0810 00:37:16.125454 1742 log.go:181] (0xc00044ab40) (5) Data frame handling\nI0810 00:37:16.125474 1742 log.go:181] (0xc00044ab40) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31438/\nI0810 00:37:16.126574 1742 log.go:181] (0xc00015cdc0) Data frame received for 3\nI0810 00:37:16.126589 1742 log.go:181] (0xc0008be460) (3) Data frame handling\nI0810 00:37:16.126605 1742 log.go:181] (0xc0008be460) (3) Data frame sent\nI0810 00:37:16.127069 1742 log.go:181] (0xc00015cdc0) Data frame received for 3\nI0810 00:37:16.127176 1742 log.go:181] (0xc0008be460) (3) Data frame handling\nI0810 00:37:16.127462 1742 log.go:181] (0xc00015cdc0) Data frame received for 5\nI0810 00:37:16.127484 1742 log.go:181] (0xc00044ab40) (5) Data frame handling\nI0810 00:37:16.128482 1742 log.go:181] (0xc00015cdc0) Data frame received for 1\nI0810 00:37:16.128510 1742 log.go:181] (0xc000d76460) (1) Data frame handling\nI0810 00:37:16.128531 1742 log.go:181] (0xc000d76460) (1) Data frame sent\nI0810 00:37:16.128548 1742 log.go:181] (0xc00015cdc0) (0xc000d76460) Stream removed, broadcasting: 1\nI0810 00:37:16.128564 1742 log.go:181] (0xc00015cdc0) Go away received\nI0810 00:37:16.129101 1742 log.go:181] (0xc00015cdc0) (0xc000d76460) Stream removed, broadcasting: 1\nI0810 00:37:16.129128 1742 log.go:181] (0xc00015cdc0) (0xc0008be460) Stream removed, broadcasting: 3\nI0810 00:37:16.129140 1742 log.go:181] (0xc00015cdc0) (0xc00044ab40) Stream removed, broadcasting: 5\n" Aug 10 00:37:16.135: INFO: stdout: "affinity-nodeport-timeout-4j7ml" Aug 10 00:37:16.135: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-9692, will wait for the garbage collector to delete the pods Aug 10 00:37:16.240: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 5.071777ms Aug 10 00:37:16.640: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 
400.349726ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:37:24.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9692" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:57.071 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":206,"skipped":3289,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:37:24.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Aug 10 00:37:24.123: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:37:30.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-578" for this suite. • [SLOW TEST:6.784 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":303,"completed":207,"skipped":3299,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:37:30.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name 
configmap-test-volume-map-19fe3300-464a-4ee0-9199-7ac0d7be61b2 STEP: Creating a pod to test consume configMaps Aug 10 00:37:30.887: INFO: Waiting up to 5m0s for pod "pod-configmaps-013525ea-40b1-4f49-8120-0c6885c5d5ba" in namespace "configmap-8464" to be "Succeeded or Failed" Aug 10 00:37:30.937: INFO: Pod "pod-configmaps-013525ea-40b1-4f49-8120-0c6885c5d5ba": Phase="Pending", Reason="", readiness=false. Elapsed: 50.648374ms Aug 10 00:37:32.942: INFO: Pod "pod-configmaps-013525ea-40b1-4f49-8120-0c6885c5d5ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054864271s Aug 10 00:37:34.946: INFO: Pod "pod-configmaps-013525ea-40b1-4f49-8120-0c6885c5d5ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059720909s STEP: Saw pod success Aug 10 00:37:34.947: INFO: Pod "pod-configmaps-013525ea-40b1-4f49-8120-0c6885c5d5ba" satisfied condition "Succeeded or Failed" Aug 10 00:37:34.950: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-013525ea-40b1-4f49-8120-0c6885c5d5ba container configmap-volume-test: STEP: delete the pod Aug 10 00:37:34.971: INFO: Waiting for pod pod-configmaps-013525ea-40b1-4f49-8120-0c6885c5d5ba to disappear Aug 10 00:37:34.989: INFO: Pod pod-configmaps-013525ea-40b1-4f49-8120-0c6885c5d5ba no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:37:34.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8464" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":208,"skipped":3311,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:37:34.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 10 00:37:35.183: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fcbbe579-418c-4294-b6f7-e682f08c4e41" in namespace "downward-api-4267" to be "Succeeded or Failed" Aug 10 00:37:35.185: INFO: Pod "downwardapi-volume-fcbbe579-418c-4294-b6f7-e682f08c4e41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069064ms Aug 10 00:37:37.189: INFO: Pod "downwardapi-volume-fcbbe579-418c-4294-b6f7-e682f08c4e41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006676214s Aug 10 00:37:39.193: INFO: Pod "downwardapi-volume-fcbbe579-418c-4294-b6f7-e682f08c4e41": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010377709s STEP: Saw pod success Aug 10 00:37:39.193: INFO: Pod "downwardapi-volume-fcbbe579-418c-4294-b6f7-e682f08c4e41" satisfied condition "Succeeded or Failed" Aug 10 00:37:39.196: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-fcbbe579-418c-4294-b6f7-e682f08c4e41 container client-container: STEP: delete the pod Aug 10 00:37:39.292: INFO: Waiting for pod downwardapi-volume-fcbbe579-418c-4294-b6f7-e682f08c4e41 to disappear Aug 10 00:37:39.305: INFO: Pod downwardapi-volume-fcbbe579-418c-4294-b6f7-e682f08c4e41 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:37:39.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4267" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":303,"completed":209,"skipped":3368,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:37:39.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Aug 10 00:37:39.368: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the sample API server. Aug 10 00:37:40.245: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Aug 10 00:37:42.763: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732616660, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732616660, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732616660, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732616660, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67c46cd746\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 10 00:37:44.767: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732616660, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732616660, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732616660, loc:(*time.Location)(0x7e34b60)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732616660, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67c46cd746\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 10 00:37:47.599: INFO: Waited 824.437342ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:37:48.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-9963" for this suite. • [SLOW TEST:8.921 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":303,"completed":210,"skipped":3399,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:37:48.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for 
a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-ce2bd9e5-a2d9-4a34-ad40-02ba32bda619 STEP: Creating a pod to test consume configMaps Aug 10 00:37:48.785: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a7b38683-4e14-470a-ae3f-48ddfb6e4769" in namespace "projected-3847" to be "Succeeded or Failed" Aug 10 00:37:48.851: INFO: Pod "pod-projected-configmaps-a7b38683-4e14-470a-ae3f-48ddfb6e4769": Phase="Pending", Reason="", readiness=false. Elapsed: 66.019658ms Aug 10 00:37:50.986: INFO: Pod "pod-projected-configmaps-a7b38683-4e14-470a-ae3f-48ddfb6e4769": Phase="Pending", Reason="", readiness=false. Elapsed: 2.200636824s Aug 10 00:37:52.991: INFO: Pod "pod-projected-configmaps-a7b38683-4e14-470a-ae3f-48ddfb6e4769": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.205596633s STEP: Saw pod success Aug 10 00:37:52.991: INFO: Pod "pod-projected-configmaps-a7b38683-4e14-470a-ae3f-48ddfb6e4769" satisfied condition "Succeeded or Failed" Aug 10 00:37:52.994: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-a7b38683-4e14-470a-ae3f-48ddfb6e4769 container projected-configmap-volume-test: STEP: delete the pod Aug 10 00:37:53.292: INFO: Waiting for pod pod-projected-configmaps-a7b38683-4e14-470a-ae3f-48ddfb6e4769 to disappear Aug 10 00:37:53.318: INFO: Pod pod-projected-configmaps-a7b38683-4e14-470a-ae3f-48ddfb6e4769 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:37:53.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3847" for this suite. 
• [SLOW TEST:5.100 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":211,"skipped":3453,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:37:53.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Starting the proxy Aug 10 00:37:53.379: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix947453358/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:37:53.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"kubectl-1351" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":303,"completed":212,"skipped":3456,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:37:53.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-1184 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating stateful set ss in namespace statefulset-1184 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1184 Aug 10 00:37:53.686: INFO: Found 0 stateful pods, waiting for 1 Aug 10 00:38:03.696: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Aug 10 00:38:03.698: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-1184 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 10 00:38:03.974: INFO: stderr: "I0810 00:38:03.820640 1773 log.go:181] (0xc00003a4d0) (0xc000a70460) Create stream\nI0810 00:38:03.820682 1773 log.go:181] (0xc00003a4d0) (0xc000a70460) Stream added, broadcasting: 1\nI0810 00:38:03.822566 1773 log.go:181] (0xc00003a4d0) Reply frame received for 1\nI0810 00:38:03.822610 1773 log.go:181] (0xc00003a4d0) (0xc000a6a000) Create stream\nI0810 00:38:03.822623 1773 log.go:181] (0xc00003a4d0) (0xc000a6a000) Stream added, broadcasting: 3\nI0810 00:38:03.823527 1773 log.go:181] (0xc00003a4d0) Reply frame received for 3\nI0810 00:38:03.823582 1773 log.go:181] (0xc00003a4d0) (0xc000a5afa0) Create stream\nI0810 00:38:03.823603 1773 log.go:181] (0xc00003a4d0) (0xc000a5afa0) Stream added, broadcasting: 5\nI0810 00:38:03.825429 1773 log.go:181] (0xc00003a4d0) Reply frame received for 5\nI0810 00:38:03.888363 1773 log.go:181] (0xc00003a4d0) Data frame received for 5\nI0810 00:38:03.888389 1773 log.go:181] (0xc000a5afa0) (5) Data frame handling\nI0810 00:38:03.888402 1773 log.go:181] (0xc000a5afa0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0810 00:38:03.966019 1773 log.go:181] (0xc00003a4d0) Data frame received for 3\nI0810 00:38:03.966071 1773 log.go:181] (0xc000a6a000) (3) Data frame handling\nI0810 00:38:03.966096 1773 log.go:181] (0xc000a6a000) (3) Data frame sent\nI0810 00:38:03.966112 1773 log.go:181] (0xc00003a4d0) Data frame received for 3\nI0810 00:38:03.966125 1773 log.go:181] (0xc000a6a000) (3) Data frame handling\nI0810 00:38:03.966159 1773 log.go:181] (0xc00003a4d0) Data frame received for 5\nI0810 00:38:03.966187 1773 log.go:181] (0xc000a5afa0) (5) Data frame handling\nI0810 00:38:03.968835 1773 log.go:181] (0xc00003a4d0) Data frame received for 1\nI0810 00:38:03.968855 1773 log.go:181] (0xc000a70460) (1) Data frame handling\nI0810 00:38:03.968866 1773 log.go:181] (0xc000a70460) 
(1) Data frame sent\nI0810 00:38:03.968877 1773 log.go:181] (0xc00003a4d0) (0xc000a70460) Stream removed, broadcasting: 1\nI0810 00:38:03.968924 1773 log.go:181] (0xc00003a4d0) Go away received\nI0810 00:38:03.969264 1773 log.go:181] (0xc00003a4d0) (0xc000a70460) Stream removed, broadcasting: 1\nI0810 00:38:03.969281 1773 log.go:181] (0xc00003a4d0) (0xc000a6a000) Stream removed, broadcasting: 3\nI0810 00:38:03.969289 1773 log.go:181] (0xc00003a4d0) (0xc000a5afa0) Stream removed, broadcasting: 5\n" Aug 10 00:38:03.975: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 10 00:38:03.975: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 10 00:38:03.979: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Aug 10 00:38:13.983: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 10 00:38:13.983: INFO: Waiting for statefulset status.replicas updated to 0 Aug 10 00:38:14.031: INFO: POD NODE PHASE GRACE CONDITIONS Aug 10 00:38:14.031: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:37:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:38:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:38:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:37:53 +0000 UTC }] Aug 10 00:38:14.031: INFO: Aug 10 00:38:14.031: INFO: StatefulSet ss has not reached scale 3, at 1 Aug 10 00:38:15.036: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996592737s Aug 10 00:38:16.395: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.991310814s Aug 10 00:38:17.484: INFO: Verifying statefulset ss doesn't scale past 3 
for another 6.632408689s Aug 10 00:38:18.538: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.543163647s Aug 10 00:38:19.573: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.489833285s Aug 10 00:38:20.578: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.454141091s Aug 10 00:38:21.584: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.449267435s Aug 10 00:38:22.589: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.443877543s Aug 10 00:38:23.594: INFO: Verifying statefulset ss doesn't scale past 3 for another 438.726726ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1184 Aug 10 00:38:24.615: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1184 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 10 00:38:24.837: INFO: stderr: "I0810 00:38:24.755832 1791 log.go:181] (0xc000bc9080) (0xc000ad1360) Create stream\nI0810 00:38:24.755885 1791 log.go:181] (0xc000bc9080) (0xc000ad1360) Stream added, broadcasting: 1\nI0810 00:38:24.758670 1791 log.go:181] (0xc000bc9080) Reply frame received for 1\nI0810 00:38:24.758737 1791 log.go:181] (0xc000bc9080) (0xc0005abf40) Create stream\nI0810 00:38:24.758761 1791 log.go:181] (0xc000bc9080) (0xc0005abf40) Stream added, broadcasting: 3\nI0810 00:38:24.759696 1791 log.go:181] (0xc000bc9080) Reply frame received for 3\nI0810 00:38:24.760177 1791 log.go:181] (0xc000bc9080) (0xc0009320a0) Create stream\nI0810 00:38:24.760214 1791 log.go:181] (0xc000bc9080) (0xc0009320a0) Stream added, broadcasting: 5\nI0810 00:38:24.762035 1791 log.go:181] (0xc000bc9080) Reply frame received for 5\nI0810 00:38:24.831216 1791 log.go:181] (0xc000bc9080) Data frame received for 5\nI0810 00:38:24.831244 1791 log.go:181] (0xc0009320a0) (5) Data frame handling\n+ mv -v /tmp/index.html 
/usr/local/apache2/htdocs/\nI0810 00:38:24.831285 1791 log.go:181] (0xc000bc9080) Data frame received for 3\nI0810 00:38:24.831320 1791 log.go:181] (0xc0005abf40) (3) Data frame handling\nI0810 00:38:24.831331 1791 log.go:181] (0xc0005abf40) (3) Data frame sent\nI0810 00:38:24.831337 1791 log.go:181] (0xc000bc9080) Data frame received for 3\nI0810 00:38:24.831343 1791 log.go:181] (0xc0005abf40) (3) Data frame handling\nI0810 00:38:24.831372 1791 log.go:181] (0xc0009320a0) (5) Data frame sent\nI0810 00:38:24.831387 1791 log.go:181] (0xc000bc9080) Data frame received for 5\nI0810 00:38:24.831393 1791 log.go:181] (0xc0009320a0) (5) Data frame handling\nI0810 00:38:24.832907 1791 log.go:181] (0xc000bc9080) Data frame received for 1\nI0810 00:38:24.832932 1791 log.go:181] (0xc000ad1360) (1) Data frame handling\nI0810 00:38:24.832943 1791 log.go:181] (0xc000ad1360) (1) Data frame sent\nI0810 00:38:24.832957 1791 log.go:181] (0xc000bc9080) (0xc000ad1360) Stream removed, broadcasting: 1\nI0810 00:38:24.832969 1791 log.go:181] (0xc000bc9080) Go away received\nI0810 00:38:24.833392 1791 log.go:181] (0xc000bc9080) (0xc000ad1360) Stream removed, broadcasting: 1\nI0810 00:38:24.833407 1791 log.go:181] (0xc000bc9080) (0xc0005abf40) Stream removed, broadcasting: 3\nI0810 00:38:24.833414 1791 log.go:181] (0xc000bc9080) (0xc0009320a0) Stream removed, broadcasting: 5\n" Aug 10 00:38:24.838: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 10 00:38:24.838: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 10 00:38:24.838: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1184 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 10 00:38:25.064: INFO: stderr: "I0810 00:38:24.972027 1809 log.go:181] (0xc000b2d080) (0xc000c64be0) Create 
stream\nI0810 00:38:24.972093 1809 log.go:181] (0xc000b2d080) (0xc000c64be0) Stream added, broadcasting: 1\nI0810 00:38:24.974562 1809 log.go:181] (0xc000b2d080) Reply frame received for 1\nI0810 00:38:24.974614 1809 log.go:181] (0xc000b2d080) (0xc000c530e0) Create stream\nI0810 00:38:24.974629 1809 log.go:181] (0xc000b2d080) (0xc000c530e0) Stream added, broadcasting: 3\nI0810 00:38:24.975693 1809 log.go:181] (0xc000b2d080) Reply frame received for 3\nI0810 00:38:24.975740 1809 log.go:181] (0xc000b2d080) (0xc000b108c0) Create stream\nI0810 00:38:24.975754 1809 log.go:181] (0xc000b2d080) (0xc000b108c0) Stream added, broadcasting: 5\nI0810 00:38:24.976559 1809 log.go:181] (0xc000b2d080) Reply frame received for 5\nI0810 00:38:25.054817 1809 log.go:181] (0xc000b2d080) Data frame received for 5\nI0810 00:38:25.054885 1809 log.go:181] (0xc000b108c0) (5) Data frame handling\nI0810 00:38:25.054902 1809 log.go:181] (0xc000b108c0) (5) Data frame sent\nI0810 00:38:25.054915 1809 log.go:181] (0xc000b2d080) Data frame received for 5\nI0810 00:38:25.054925 1809 log.go:181] (0xc000b108c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0810 00:38:25.054997 1809 log.go:181] (0xc000b2d080) Data frame received for 3\nI0810 00:38:25.055037 1809 log.go:181] (0xc000c530e0) (3) Data frame handling\nI0810 00:38:25.055066 1809 log.go:181] (0xc000c530e0) (3) Data frame sent\nI0810 00:38:25.055090 1809 log.go:181] (0xc000b2d080) Data frame received for 3\nI0810 00:38:25.055108 1809 log.go:181] (0xc000c530e0) (3) Data frame handling\nI0810 00:38:25.057319 1809 log.go:181] (0xc000b2d080) Data frame received for 1\nI0810 00:38:25.057347 1809 log.go:181] (0xc000c64be0) (1) Data frame handling\nI0810 00:38:25.057376 1809 log.go:181] (0xc000c64be0) (1) Data frame sent\nI0810 00:38:25.057391 1809 log.go:181] (0xc000b2d080) (0xc000c64be0) Stream removed, broadcasting: 1\nI0810 00:38:25.057408 1809 
log.go:181] (0xc000b2d080) Go away received\nI0810 00:38:25.057858 1809 log.go:181] (0xc000b2d080) (0xc000c64be0) Stream removed, broadcasting: 1\nI0810 00:38:25.057875 1809 log.go:181] (0xc000b2d080) (0xc000c530e0) Stream removed, broadcasting: 3\nI0810 00:38:25.057883 1809 log.go:181] (0xc000b2d080) (0xc000b108c0) Stream removed, broadcasting: 5\n" Aug 10 00:38:25.065: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 10 00:38:25.065: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 10 00:38:25.065: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1184 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 10 00:38:25.275: INFO: stderr: "I0810 00:38:25.197340 1827 log.go:181] (0xc000adafd0) (0xc000ab46e0) Create stream\nI0810 00:38:25.197395 1827 log.go:181] (0xc000adafd0) (0xc000ab46e0) Stream added, broadcasting: 1\nI0810 00:38:25.202790 1827 log.go:181] (0xc000adafd0) Reply frame received for 1\nI0810 00:38:25.202833 1827 log.go:181] (0xc000adafd0) (0xc0008306e0) Create stream\nI0810 00:38:25.202844 1827 log.go:181] (0xc000adafd0) (0xc0008306e0) Stream added, broadcasting: 3\nI0810 00:38:25.203744 1827 log.go:181] (0xc000adafd0) Reply frame received for 3\nI0810 00:38:25.203777 1827 log.go:181] (0xc000adafd0) (0xc0004bedc0) Create stream\nI0810 00:38:25.203787 1827 log.go:181] (0xc000adafd0) (0xc0004bedc0) Stream added, broadcasting: 5\nI0810 00:38:25.204924 1827 log.go:181] (0xc000adafd0) Reply frame received for 5\nI0810 00:38:25.267149 1827 log.go:181] (0xc000adafd0) Data frame received for 3\nI0810 00:38:25.267193 1827 log.go:181] (0xc0008306e0) (3) Data frame handling\nI0810 00:38:25.267216 1827 log.go:181] (0xc0008306e0) (3) Data frame sent\nI0810 00:38:25.267231 1827 log.go:181] (0xc000adafd0) Data frame received 
for 3\nI0810 00:38:25.267242 1827 log.go:181] (0xc0008306e0) (3) Data frame handling\nI0810 00:38:25.267296 1827 log.go:181] (0xc000adafd0) Data frame received for 5\nI0810 00:38:25.267328 1827 log.go:181] (0xc0004bedc0) (5) Data frame handling\nI0810 00:38:25.267354 1827 log.go:181] (0xc0004bedc0) (5) Data frame sent\nI0810 00:38:25.267372 1827 log.go:181] (0xc000adafd0) Data frame received for 5\nI0810 00:38:25.267396 1827 log.go:181] (0xc0004bedc0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0810 00:38:25.269316 1827 log.go:181] (0xc000adafd0) Data frame received for 1\nI0810 00:38:25.269339 1827 log.go:181] (0xc000ab46e0) (1) Data frame handling\nI0810 00:38:25.269346 1827 log.go:181] (0xc000ab46e0) (1) Data frame sent\nI0810 00:38:25.269356 1827 log.go:181] (0xc000adafd0) (0xc000ab46e0) Stream removed, broadcasting: 1\nI0810 00:38:25.269364 1827 log.go:181] (0xc000adafd0) Go away received\nI0810 00:38:25.269910 1827 log.go:181] (0xc000adafd0) (0xc000ab46e0) Stream removed, broadcasting: 1\nI0810 00:38:25.269935 1827 log.go:181] (0xc000adafd0) (0xc0008306e0) Stream removed, broadcasting: 3\nI0810 00:38:25.269947 1827 log.go:181] (0xc000adafd0) (0xc0004bedc0) Stream removed, broadcasting: 5\n" Aug 10 00:38:25.276: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 10 00:38:25.276: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 10 00:38:25.280: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Aug 10 00:38:35.287: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Aug 10 00:38:35.287: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Aug 10 00:38:35.287: INFO: Waiting for pod ss-2 to enter Running - Ready=true, 
currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Aug 10 00:38:35.292: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1184 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 10 00:38:35.538: INFO: stderr: "I0810 00:38:35.431554 1845 log.go:181] (0xc000e231e0) (0xc000e92780) Create stream\nI0810 00:38:35.431607 1845 log.go:181] (0xc000e231e0) (0xc000e92780) Stream added, broadcasting: 1\nI0810 00:38:35.436124 1845 log.go:181] (0xc000e231e0) Reply frame received for 1\nI0810 00:38:35.436165 1845 log.go:181] (0xc000e231e0) (0xc000792aa0) Create stream\nI0810 00:38:35.436174 1845 log.go:181] (0xc000e231e0) (0xc000792aa0) Stream added, broadcasting: 3\nI0810 00:38:35.436999 1845 log.go:181] (0xc000e231e0) Reply frame received for 3\nI0810 00:38:35.437036 1845 log.go:181] (0xc000e231e0) (0xc000594b40) Create stream\nI0810 00:38:35.437045 1845 log.go:181] (0xc000e231e0) (0xc000594b40) Stream added, broadcasting: 5\nI0810 00:38:35.437780 1845 log.go:181] (0xc000e231e0) Reply frame received for 5\nI0810 00:38:35.530270 1845 log.go:181] (0xc000e231e0) Data frame received for 3\nI0810 00:38:35.530318 1845 log.go:181] (0xc000792aa0) (3) Data frame handling\nI0810 00:38:35.530343 1845 log.go:181] (0xc000792aa0) (3) Data frame sent\nI0810 00:38:35.530359 1845 log.go:181] (0xc000e231e0) Data frame received for 3\nI0810 00:38:35.530378 1845 log.go:181] (0xc000792aa0) (3) Data frame handling\nI0810 00:38:35.530395 1845 log.go:181] (0xc000e231e0) Data frame received for 5\nI0810 00:38:35.530406 1845 log.go:181] (0xc000594b40) (5) Data frame handling\nI0810 00:38:35.530428 1845 log.go:181] (0xc000594b40) (5) Data frame sent\nI0810 00:38:35.530452 1845 log.go:181] (0xc000e231e0) Data frame received for 5\nI0810 00:38:35.530464 1845 log.go:181] (0xc000594b40) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html 
/tmp/\nI0810 00:38:35.532255 1845 log.go:181] (0xc000e231e0) Data frame received for 1\nI0810 00:38:35.532282 1845 log.go:181] (0xc000e92780) (1) Data frame handling\nI0810 00:38:35.532307 1845 log.go:181] (0xc000e92780) (1) Data frame sent\nI0810 00:38:35.532333 1845 log.go:181] (0xc000e231e0) (0xc000e92780) Stream removed, broadcasting: 1\nI0810 00:38:35.532354 1845 log.go:181] (0xc000e231e0) Go away received\nI0810 00:38:35.532965 1845 log.go:181] (0xc000e231e0) (0xc000e92780) Stream removed, broadcasting: 1\nI0810 00:38:35.532986 1845 log.go:181] (0xc000e231e0) (0xc000792aa0) Stream removed, broadcasting: 3\nI0810 00:38:35.532998 1845 log.go:181] (0xc000e231e0) (0xc000594b40) Stream removed, broadcasting: 5\n" Aug 10 00:38:35.538: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 10 00:38:35.538: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 10 00:38:35.538: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1184 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 10 00:38:35.781: INFO: stderr: "I0810 00:38:35.667267 1863 log.go:181] (0xc0009e2fd0) (0xc000d9a3c0) Create stream\nI0810 00:38:35.667326 1863 log.go:181] (0xc0009e2fd0) (0xc000d9a3c0) Stream added, broadcasting: 1\nI0810 00:38:35.672281 1863 log.go:181] (0xc0009e2fd0) Reply frame received for 1\nI0810 00:38:35.672334 1863 log.go:181] (0xc0009e2fd0) (0xc00095f180) Create stream\nI0810 00:38:35.672351 1863 log.go:181] (0xc0009e2fd0) (0xc00095f180) Stream added, broadcasting: 3\nI0810 00:38:35.673566 1863 log.go:181] (0xc0009e2fd0) Reply frame received for 3\nI0810 00:38:35.673602 1863 log.go:181] (0xc0009e2fd0) (0xc0007a81e0) Create stream\nI0810 00:38:35.673611 1863 log.go:181] (0xc0009e2fd0) (0xc0007a81e0) Stream added, broadcasting: 5\nI0810 
00:38:35.674431 1863 log.go:181] (0xc0009e2fd0) Reply frame received for 5\nI0810 00:38:35.738609 1863 log.go:181] (0xc0009e2fd0) Data frame received for 5\nI0810 00:38:35.738638 1863 log.go:181] (0xc0007a81e0) (5) Data frame handling\nI0810 00:38:35.738658 1863 log.go:181] (0xc0007a81e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0810 00:38:35.772276 1863 log.go:181] (0xc0009e2fd0) Data frame received for 3\nI0810 00:38:35.772310 1863 log.go:181] (0xc00095f180) (3) Data frame handling\nI0810 00:38:35.772325 1863 log.go:181] (0xc00095f180) (3) Data frame sent\nI0810 00:38:35.772465 1863 log.go:181] (0xc0009e2fd0) Data frame received for 3\nI0810 00:38:35.772510 1863 log.go:181] (0xc00095f180) (3) Data frame handling\nI0810 00:38:35.773084 1863 log.go:181] (0xc0009e2fd0) Data frame received for 5\nI0810 00:38:35.773107 1863 log.go:181] (0xc0007a81e0) (5) Data frame handling\nI0810 00:38:35.775487 1863 log.go:181] (0xc0009e2fd0) Data frame received for 1\nI0810 00:38:35.775499 1863 log.go:181] (0xc000d9a3c0) (1) Data frame handling\nI0810 00:38:35.775509 1863 log.go:181] (0xc000d9a3c0) (1) Data frame sent\nI0810 00:38:35.775519 1863 log.go:181] (0xc0009e2fd0) (0xc000d9a3c0) Stream removed, broadcasting: 1\nI0810 00:38:35.775708 1863 log.go:181] (0xc0009e2fd0) Go away received\nI0810 00:38:35.775780 1863 log.go:181] (0xc0009e2fd0) (0xc000d9a3c0) Stream removed, broadcasting: 1\nI0810 00:38:35.775793 1863 log.go:181] (0xc0009e2fd0) (0xc00095f180) Stream removed, broadcasting: 3\nI0810 00:38:35.775799 1863 log.go:181] (0xc0009e2fd0) (0xc0007a81e0) Stream removed, broadcasting: 5\n" Aug 10 00:38:35.782: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 10 00:38:35.782: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 10 00:38:35.782: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-1184 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 10 00:38:36.050: INFO: stderr: "I0810 00:38:35.928598 1881 log.go:181] (0xc000c74fd0) (0xc000b51400) Create stream\nI0810 00:38:35.928651 1881 log.go:181] (0xc000c74fd0) (0xc000b51400) Stream added, broadcasting: 1\nI0810 00:38:35.930527 1881 log.go:181] (0xc000c74fd0) Reply frame received for 1\nI0810 00:38:35.930591 1881 log.go:181] (0xc000c74fd0) (0xc0008f7b80) Create stream\nI0810 00:38:35.930612 1881 log.go:181] (0xc000c74fd0) (0xc0008f7b80) Stream added, broadcasting: 3\nI0810 00:38:35.931520 1881 log.go:181] (0xc000c74fd0) Reply frame received for 3\nI0810 00:38:35.931582 1881 log.go:181] (0xc000c74fd0) (0xc000728000) Create stream\nI0810 00:38:35.931604 1881 log.go:181] (0xc000c74fd0) (0xc000728000) Stream added, broadcasting: 5\nI0810 00:38:35.932317 1881 log.go:181] (0xc000c74fd0) Reply frame received for 5\nI0810 00:38:36.013311 1881 log.go:181] (0xc000c74fd0) Data frame received for 5\nI0810 00:38:36.013336 1881 log.go:181] (0xc000728000) (5) Data frame handling\nI0810 00:38:36.013349 1881 log.go:181] (0xc000728000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0810 00:38:36.042156 1881 log.go:181] (0xc000c74fd0) Data frame received for 3\nI0810 00:38:36.042200 1881 log.go:181] (0xc0008f7b80) (3) Data frame handling\nI0810 00:38:36.042245 1881 log.go:181] (0xc0008f7b80) (3) Data frame sent\nI0810 00:38:36.042280 1881 log.go:181] (0xc000c74fd0) Data frame received for 3\nI0810 00:38:36.042287 1881 log.go:181] (0xc0008f7b80) (3) Data frame handling\nI0810 00:38:36.043044 1881 log.go:181] (0xc000c74fd0) Data frame received for 5\nI0810 00:38:36.043078 1881 log.go:181] (0xc000728000) (5) Data frame handling\nI0810 00:38:36.044309 1881 log.go:181] (0xc000c74fd0) Data frame received for 1\nI0810 00:38:36.044337 1881 log.go:181] (0xc000b51400) (1) Data frame handling\nI0810 
00:38:36.044357 1881 log.go:181] (0xc000b51400) (1) Data frame sent\nI0810 00:38:36.044372 1881 log.go:181] (0xc000c74fd0) (0xc000b51400) Stream removed, broadcasting: 1\nI0810 00:38:36.044395 1881 log.go:181] (0xc000c74fd0) Go away received\nI0810 00:38:36.044933 1881 log.go:181] (0xc000c74fd0) (0xc000b51400) Stream removed, broadcasting: 1\nI0810 00:38:36.044968 1881 log.go:181] (0xc000c74fd0) (0xc0008f7b80) Stream removed, broadcasting: 3\nI0810 00:38:36.044985 1881 log.go:181] (0xc000c74fd0) (0xc000728000) Stream removed, broadcasting: 5\n" Aug 10 00:38:36.050: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 10 00:38:36.050: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 10 00:38:36.050: INFO: Waiting for statefulset status.replicas updated to 0 Aug 10 00:38:36.054: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Aug 10 00:38:46.066: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 10 00:38:46.066: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Aug 10 00:38:46.066: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Aug 10 00:38:46.119: INFO: POD NODE PHASE GRACE CONDITIONS Aug 10 00:38:46.119: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:37:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:38:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:38:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:37:53 +0000 UTC }] Aug 10 00:38:46.119: INFO: ss-1 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 
2020-08-10 00:38:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:38:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:38:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:38:14 +0000 UTC }] Aug 10 00:38:46.119: INFO: ss-2 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:38:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:38:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:38:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:38:14 +0000 UTC }] Aug 10 00:38:46.119: INFO: Aug 10 00:38:46.119: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 10 00:38:47.329: INFO: POD NODE PHASE GRACE CONDITIONS Aug 10 00:38:47.329: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:37:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:38:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:38:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:37:53 +0000 UTC }] Aug 10 00:38:47.329: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:38:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:38:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:38:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} 
{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:38:14 +0000 UTC }] Aug 10 00:38:47.329: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:38:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:38:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:38:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:38:14 +0000 UTC }] Aug 10 00:38:47.329: INFO: Aug 10 00:38:47.329: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 10 00:38:48.335: INFO: POD NODE PHASE GRACE CONDITIONS Aug 10 00:38:48.335: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:37:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:38:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:38:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:37:53 +0000 UTC }] Aug 10 00:38:48.335: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:38:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:38:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:38:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:38:14 +0000 UTC }] Aug 10 00:38:48.335: INFO: ss-2 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:38:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:38:36 +0000 UTC ContainersNotReady containers 
with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:38:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:38:14 +0000 UTC }] Aug 10 00:38:48.335: INFO: Aug 10 00:38:48.335: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 10 00:38:49.340: INFO: POD NODE PHASE GRACE CONDITIONS Aug 10 00:38:49.340: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:37:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:38:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:38:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:37:53 +0000 UTC }] Aug 10 00:38:49.340: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:38:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:38:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:38:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:38:14 +0000 UTC }] Aug 10 00:38:49.340: INFO: ss-2 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:38:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:38:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:38:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-10 00:38:14 +0000 UTC }] Aug 10 00:38:49.340: INFO: Aug 10 00:38:49.340: INFO: 
StatefulSet ss has not reached scale 0, at 3 Aug 10 00:38:50.346: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.73239836s Aug 10 00:38:51.349: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.727066726s Aug 10 00:38:52.352: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.723757536s Aug 10 00:38:53.362: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.720792895s Aug 10 00:38:54.365: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.710870407s Aug 10 00:38:55.369: INFO: Verifying statefulset ss doesn't scale past 0 for another 707.399629ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-1184 Aug 10 00:38:56.372: INFO: Scaling statefulset ss to 0 Aug 10 00:38:56.382: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Aug 10 00:38:56.385: INFO: Deleting all statefulset in ns statefulset-1184 Aug 10 00:38:56.387: INFO: Scaling statefulset ss to 0 Aug 10 00:38:56.394: INFO: Waiting for statefulset status.replicas updated to 0 Aug 10 00:38:56.396: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:38:56.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1184" for this suite.
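The Ready=false flips recorded above are induced deliberately: each exec moves the Apache index page out of the document root so the container's HTTP readiness probe starts failing, and moving it back restores readiness. A minimal sketch of that technique as a shell helper (namespace, pod names, and docroot path are taken from this run; the readiness-probe behavior is assumed from the webserver image, and `toggle_index` is a hypothetical name, not part of the test):

```shell
# Sketch of the probe-toggling trick this test runs against each replica.
#   toggle_index POD off  -> move index.html out of the docroot (probe 404s)
#   toggle_index POD on   -> move it back (probe recovers)
# The "|| true" mirrors the test: the mv may fail if already toggled.
toggle_index() {
  pod="$1"
  if [ "$2" = off ]; then
    src=/usr/local/apache2/htdocs/index.html
    dst=/tmp/
  else
    src=/tmp/index.html
    dst=/usr/local/apache2/htdocs/
  fi
  kubectl --namespace=statefulset-1184 exec "$pod" -- \
    /bin/sh -x -c "mv -v $src $dst || true"
}

# Example, as in the log:
#   toggle_index ss-0 off
#   toggle_index ss-0 on
```

With readiness broken this way on all three replicas, the scale-down to 0 still proceeds, which is exactly what this burst-scaling spec asserts.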
• [SLOW TEST:62.948 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":303,"completed":213,"skipped":3490,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:38:56.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Aug 10 00:38:56.516: INFO: namespace kubectl-9446 Aug 10 00:38:56.516: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9446' Aug 10 00:38:56.857: INFO: stderr: "" Aug 10 00:38:56.857: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting 
for Agnhost primary to start. Aug 10 00:38:57.861: INFO: Selector matched 1 pods for map[app:agnhost] Aug 10 00:38:57.861: INFO: Found 0 / 1 Aug 10 00:38:58.861: INFO: Selector matched 1 pods for map[app:agnhost] Aug 10 00:38:58.861: INFO: Found 0 / 1 Aug 10 00:38:59.862: INFO: Selector matched 1 pods for map[app:agnhost] Aug 10 00:38:59.862: INFO: Found 0 / 1 Aug 10 00:39:00.861: INFO: Selector matched 1 pods for map[app:agnhost] Aug 10 00:39:00.862: INFO: Found 1 / 1 Aug 10 00:39:00.862: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Aug 10 00:39:00.865: INFO: Selector matched 1 pods for map[app:agnhost] Aug 10 00:39:00.865: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Aug 10 00:39:00.865: INFO: wait on agnhost-primary startup in kubectl-9446 Aug 10 00:39:00.865: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config logs agnhost-primary-wpgq4 agnhost-primary --namespace=kubectl-9446' Aug 10 00:39:01.046: INFO: stderr: "" Aug 10 00:39:01.046: INFO: stdout: "Paused\n" STEP: exposing RC Aug 10 00:39:01.046: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-9446' Aug 10 00:39:01.193: INFO: stderr: "" Aug 10 00:39:01.193: INFO: stdout: "service/rm2 exposed\n" Aug 10 00:39:01.200: INFO: Service rm2 in namespace kubectl-9446 found. STEP: exposing service Aug 10 00:39:03.207: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-9446' Aug 10 00:39:03.376: INFO: stderr: "" Aug 10 00:39:03.376: INFO: stdout: "service/rm3 exposed\n" Aug 10 00:39:03.411: INFO: Service rm3 in namespace kubectl-9446 found. 
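The expose sequence above layers one service on another: `kubectl expose` works not only on a replication controller but on an existing service, and the new service reuses the old one's selector while mapping a fresh port onto the same target port. A sketch of the two commands from this run, wrapped in a hypothetical helper (namespace, service names, and ports as logged):

```shell
# expose_chain: recreate the two expose steps from this run.
# rm2 fronts the RC's pods (1234 -> 6379); rm3 fronts the same pods
# via rm2's selector (2345 -> 6379). rm2/rm3 are the names from the log.
expose_chain() {
  kubectl --namespace=kubectl-9446 expose rc agnhost-primary \
    --name=rm2 --port=1234 --target-port=6379
  kubectl --namespace=kubectl-9446 expose service rm2 \
    --name=rm3 --port=2345 --target-port=6379
}
```

Because both services resolve to the same endpoints, traffic to rm3:2345 and rm2:1234 lands on the same container port 6379.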
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:39:05.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9446" for this suite. • [SLOW TEST:9.002 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1246 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":303,"completed":214,"skipped":3500,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:39:05.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token Aug 10 00:39:06.093: INFO: created pod pod-service-account-defaultsa Aug 10 00:39:06.093: INFO: pod pod-service-account-defaultsa service account token volume mount: true Aug 10 00:39:06.115: INFO: created pod pod-service-account-mountsa Aug 10 00:39:06.115: INFO: pod pod-service-account-mountsa service account token volume 
mount: true Aug 10 00:39:06.196: INFO: created pod pod-service-account-nomountsa Aug 10 00:39:06.196: INFO: pod pod-service-account-nomountsa service account token volume mount: false Aug 10 00:39:06.202: INFO: created pod pod-service-account-defaultsa-mountspec Aug 10 00:39:06.202: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Aug 10 00:39:06.213: INFO: created pod pod-service-account-mountsa-mountspec Aug 10 00:39:06.213: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Aug 10 00:39:06.253: INFO: created pod pod-service-account-nomountsa-mountspec Aug 10 00:39:06.254: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Aug 10 00:39:06.277: INFO: created pod pod-service-account-defaultsa-nomountspec Aug 10 00:39:06.277: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Aug 10 00:39:06.341: INFO: created pod pod-service-account-mountsa-nomountspec Aug 10 00:39:06.341: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Aug 10 00:39:06.372: INFO: created pod pod-service-account-nomountsa-nomountspec Aug 10 00:39:06.372: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:39:06.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5094" for this suite. 
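The pod-name matrix above encodes the automount precedence rule: `automountServiceAccountToken` on the pod spec, when set, overrides the ServiceAccount's setting, and the ServiceAccount's value applies only when the pod spec leaves the field unset. A minimal sketch of one cell of the matrix (hypothetical names and placeholder image), matching the `nomountsa-mountspec` result of `mount: true` above:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa
automountServiceAccountToken: false    # SA opts out of token automount
---
apiVersion: v1
kind: Pod
metadata:
  name: nomountsa-mountspec
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: true   # pod spec overrides the SA: token IS mounted
  containers:
  - name: token-test
    image: k8s.gcr.io/pause:3.2        # placeholder image for the sketch
```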
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":303,"completed":215,"skipped":3522,"failed":0} SSSSS ------------------------------ [sig-instrumentation] Events API should delete a collection of events [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-instrumentation] Events API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:39:06.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should delete a collection of events [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of events STEP: get a list of Events with a label in the current namespace STEP: delete a list of events Aug 10 00:39:06.649: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity [AfterEach] [sig-instrumentation] Events API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:39:06.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-4697" for this suite. 
•{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":303,"completed":216,"skipped":3527,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:39:06.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Aug 10 00:39:06.804: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 10 00:39:06.835: INFO: Waiting for terminating namespaces to be deleted... 
Aug 10 00:39:06.837: INFO: Logging pods the apiserver thinks is on node latest-worker before test Aug 10 00:39:06.843: INFO: coredns-f9fd979d6-s745j from kube-system started at 2020-07-19 21:39:25 +0000 UTC (1 container statuses recorded) Aug 10 00:39:06.843: INFO: Container coredns ready: true, restart count 0 Aug 10 00:39:06.843: INFO: coredns-f9fd979d6-zs4sj from kube-system started at 2020-07-19 21:39:36 +0000 UTC (1 container statuses recorded) Aug 10 00:39:06.843: INFO: Container coredns ready: true, restart count 0 Aug 10 00:39:06.843: INFO: kindnet-46dnt from kube-system started at 2020-07-19 21:38:46 +0000 UTC (1 container statuses recorded) Aug 10 00:39:06.843: INFO: Container kindnet-cni ready: true, restart count 0 Aug 10 00:39:06.843: INFO: kube-proxy-sxpg9 from kube-system started at 2020-07-19 21:38:45 +0000 UTC (1 container statuses recorded) Aug 10 00:39:06.843: INFO: Container kube-proxy ready: true, restart count 0 Aug 10 00:39:06.843: INFO: local-path-provisioner-8b46957d4-2gzpd from local-path-storage started at 2020-07-19 21:39:25 +0000 UTC (1 container statuses recorded) Aug 10 00:39:06.843: INFO: Container local-path-provisioner ready: true, restart count 0 Aug 10 00:39:06.843: INFO: pod-service-account-defaultsa-mountspec from svcaccounts-5094 started at 2020-08-10 00:39:06 +0000 UTC (1 container statuses recorded) Aug 10 00:39:06.843: INFO: Container token-test ready: false, restart count 0 Aug 10 00:39:06.843: INFO: pod-service-account-defaultsa-nomountspec from svcaccounts-5094 started at 2020-08-10 00:39:06 +0000 UTC (1 container statuses recorded) Aug 10 00:39:06.843: INFO: Container token-test ready: false, restart count 0 Aug 10 00:39:06.843: INFO: pod-service-account-nomountsa-mountspec from svcaccounts-5094 started at 2020-08-10 00:39:06 +0000 UTC (1 container statuses recorded) Aug 10 00:39:06.843: INFO: Container token-test ready: false, restart count 0 Aug 10 00:39:06.843: INFO: pod-service-account-nomountsa-nomountspec from 
svcaccounts-5094 started at 2020-08-10 00:39:06 +0000 UTC (1 container statuses recorded) Aug 10 00:39:06.843: INFO: Container token-test ready: false, restart count 0 Aug 10 00:39:06.843: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Aug 10 00:39:06.848: INFO: kindnet-g6zbt from kube-system started at 2020-07-19 21:38:46 +0000 UTC (1 container statuses recorded) Aug 10 00:39:06.848: INFO: Container kindnet-cni ready: true, restart count 0 Aug 10 00:39:06.848: INFO: kube-proxy-nsnzn from kube-system started at 2020-07-19 21:38:45 +0000 UTC (1 container statuses recorded) Aug 10 00:39:06.848: INFO: Container kube-proxy ready: true, restart count 0 Aug 10 00:39:06.848: INFO: agnhost-primary-wpgq4 from kubectl-9446 started at 2020-08-10 00:38:56 +0000 UTC (1 container statuses recorded) Aug 10 00:39:06.848: INFO: Container agnhost-primary ready: true, restart count 0 Aug 10 00:39:06.848: INFO: pod-service-account-defaultsa from svcaccounts-5094 started at 2020-08-10 00:39:06 +0000 UTC (1 container statuses recorded) Aug 10 00:39:06.848: INFO: Container token-test ready: false, restart count 0 Aug 10 00:39:06.848: INFO: pod-service-account-mountsa from svcaccounts-5094 started at 2020-08-10 00:39:06 +0000 UTC (1 container statuses recorded) Aug 10 00:39:06.848: INFO: Container token-test ready: false, restart count 0 Aug 10 00:39:06.848: INFO: pod-service-account-mountsa-mountspec from svcaccounts-5094 started at 2020-08-10 00:39:06 +0000 UTC (1 container statuses recorded) Aug 10 00:39:06.848: INFO: Container token-test ready: false, restart count 0 Aug 10 00:39:06.848: INFO: pod-service-account-mountsa-nomountspec from svcaccounts-5094 started at 2020-08-10 00:39:06 +0000 UTC (1 container statuses recorded) Aug 10 00:39:06.848: INFO: Container token-test ready: false, restart count 0 Aug 10 00:39:06.848: INFO: pod-service-account-nomountsa from svcaccounts-5094 started at 2020-08-10 00:39:06 +0000 UTC (1 container statuses recorded) 
Aug 10 00:39:06.848: INFO: Container token-test ready: false, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-bbd40c8d-8924-4c9e-80b9-edab4b411c79 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-bbd40c8d-8924-4c9e-80b9-edab4b411c79 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-bbd40c8d-8924-4c9e-80b9-edab4b411c79 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:39:35.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5021" for this suite. 
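All three pods above schedule onto the same node because a host-port conflict requires the full (hostIP, hostPort, protocol) triple to collide, not just the port number. A minimal sketch of the first two pods (hypothetical names and placeholder image; the real test pins pods via affinity on the random node label rather than `nodeName`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  nodeName: latest-worker2        # assumed pinning; the test uses node affinity on a random label
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.2   # placeholder image for the sketch
    ports:
    - containerPort: 8080
      hostPort: 54321
      hostIP: 127.0.0.1
      protocol: TCP
---
apiVersion: v1
kind: Pod
metadata:
  name: pod2
spec:
  nodeName: latest-worker2
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.2
    ports:
    - containerPort: 8080
      hostPort: 54321             # same hostPort...
      hostIP: 127.0.0.2           # ...but different hostIP, so no conflict
      protocol: TCP
```

pod3 then reuses hostPort 54321 and hostIP 127.0.0.2 but switches to `protocol: UDP`, which again avoids a conflict.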
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:29.059 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":303,"completed":217,"skipped":3567,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:39:35.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Aug 10 00:39:35.962: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 10 00:39:36.009: INFO: Waiting for terminating namespaces to be deleted... 
Aug 10 00:39:36.036: INFO: Logging pods the apiserver thinks is on node latest-worker before test Aug 10 00:39:36.056: INFO: coredns-f9fd979d6-s745j from kube-system started at 2020-07-19 21:39:25 +0000 UTC (1 container statuses recorded) Aug 10 00:39:36.056: INFO: Container coredns ready: true, restart count 0 Aug 10 00:39:36.056: INFO: coredns-f9fd979d6-zs4sj from kube-system started at 2020-07-19 21:39:36 +0000 UTC (1 container statuses recorded) Aug 10 00:39:36.056: INFO: Container coredns ready: true, restart count 0 Aug 10 00:39:36.056: INFO: kindnet-46dnt from kube-system started at 2020-07-19 21:38:46 +0000 UTC (1 container statuses recorded) Aug 10 00:39:36.056: INFO: Container kindnet-cni ready: true, restart count 0 Aug 10 00:39:36.056: INFO: kube-proxy-sxpg9 from kube-system started at 2020-07-19 21:38:45 +0000 UTC (1 container statuses recorded) Aug 10 00:39:36.056: INFO: Container kube-proxy ready: true, restart count 0 Aug 10 00:39:36.056: INFO: local-path-provisioner-8b46957d4-2gzpd from local-path-storage started at 2020-07-19 21:39:25 +0000 UTC (1 container statuses recorded) Aug 10 00:39:36.056: INFO: Container local-path-provisioner ready: true, restart count 0 Aug 10 00:39:36.056: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Aug 10 00:39:36.061: INFO: kindnet-g6zbt from kube-system started at 2020-07-19 21:38:46 +0000 UTC (1 container statuses recorded) Aug 10 00:39:36.061: INFO: Container kindnet-cni ready: true, restart count 0 Aug 10 00:39:36.061: INFO: kube-proxy-nsnzn from kube-system started at 2020-07-19 21:38:45 +0000 UTC (1 container statuses recorded) Aug 10 00:39:36.061: INFO: Container kube-proxy ready: true, restart count 0 Aug 10 00:39:36.061: INFO: pod1 from sched-pred-5021 started at 2020-08-10 00:39:19 +0000 UTC (1 container statuses recorded) Aug 10 00:39:36.061: INFO: Container pod1 ready: true, restart count 0 Aug 10 00:39:36.061: INFO: pod2 from sched-pred-5021 started at 2020-08-10 00:39:27 
+0000 UTC (1 container statuses recorded) Aug 10 00:39:36.061: INFO: Container pod2 ready: true, restart count 0 Aug 10 00:39:36.061: INFO: pod3 from sched-pred-5021 started at 2020-08-10 00:39:31 +0000 UTC (1 container statuses recorded) Aug 10 00:39:36.061: INFO: Container pod3 ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 Aug 10 00:39:36.157: INFO: Pod coredns-f9fd979d6-s745j requesting resource cpu=100m on Node latest-worker Aug 10 00:39:36.157: INFO: Pod coredns-f9fd979d6-zs4sj requesting resource cpu=100m on Node latest-worker Aug 10 00:39:36.157: INFO: Pod kindnet-46dnt requesting resource cpu=100m on Node latest-worker Aug 10 00:39:36.157: INFO: Pod kindnet-g6zbt requesting resource cpu=100m on Node latest-worker2 Aug 10 00:39:36.157: INFO: Pod kube-proxy-nsnzn requesting resource cpu=0m on Node latest-worker2 Aug 10 00:39:36.157: INFO: Pod kube-proxy-sxpg9 requesting resource cpu=0m on Node latest-worker Aug 10 00:39:36.157: INFO: Pod local-path-provisioner-8b46957d4-2gzpd requesting resource cpu=0m on Node latest-worker Aug 10 00:39:36.157: INFO: Pod pod1 requesting resource cpu=0m on Node latest-worker2 Aug 10 00:39:36.157: INFO: Pod pod2 requesting resource cpu=0m on Node latest-worker2 Aug 10 00:39:36.157: INFO: Pod pod3 requesting resource cpu=0m on Node latest-worker2 STEP: Starting Pods to consume most of the cluster CPU. Aug 10 00:39:36.157: INFO: Creating a pod which consumes cpu=10990m on Node latest-worker Aug 10 00:39:36.164: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 STEP: Creating another pod that requires unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-d697816e-064f-48d1-817b-bb4b957a11c3.1629c134fbd7056e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-326/filler-pod-d697816e-064f-48d1-817b-bb4b957a11c3 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-d697816e-064f-48d1-817b-bb4b957a11c3.1629c135e6dd1857], Reason = [Created], Message = [Created container filler-pod-d697816e-064f-48d1-817b-bb4b957a11c3] STEP: Considering event: Type = [Normal], Name = [filler-pod-72146b42-b9d3-4c78-a9b5-edda799d4432.1629c1354d4fae42], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-d697816e-064f-48d1-817b-bb4b957a11c3.1629c135f87a0e53], Reason = [Started], Message = [Started container filler-pod-d697816e-064f-48d1-817b-bb4b957a11c3] STEP: Considering event: Type = [Normal], Name = [filler-pod-72146b42-b9d3-4c78-a9b5-edda799d4432.1629c135b8eb98cd], Reason = [Created], Message = [Created container filler-pod-72146b42-b9d3-4c78-a9b5-edda799d4432] STEP: Considering event: Type = [Normal], Name = [filler-pod-72146b42-b9d3-4c78-a9b5-edda799d4432.1629c135e04a9bf0], Reason = [Started], Message = [Started container filler-pod-72146b42-b9d3-4c78-a9b5-edda799d4432] STEP: Considering event: Type = [Normal], Name = [filler-pod-d697816e-064f-48d1-817b-bb4b957a11c3.1629c1355cc238c8], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-72146b42-b9d3-4c78-a9b5-edda799d4432.1629c134faebe443], Reason = [Scheduled], Message = [Successfully assigned sched-pred-326/filler-pod-72146b42-b9d3-4c78-a9b5-edda799d4432 to latest-worker] STEP: Considering event: Type = [Warning], Name = [additional-pod.1629c1366aded543], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: 
}, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.1629c1366d83ef63], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:39:43.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-326" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:7.745 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":303,"completed":218,"skipped":3579,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: 
Creating a kubernetes client Aug 10 00:39:43.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 10 00:39:43.633: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Aug 10 00:39:47.170: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3112 create -f -' Aug 10 00:39:51.961: INFO: stderr: "" Aug 10 00:39:51.961: INFO: stdout: "e2e-test-crd-publish-openapi-2096-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Aug 10 00:39:51.961: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3112 delete e2e-test-crd-publish-openapi-2096-crds test-foo' Aug 10 00:39:52.075: INFO: stderr: "" Aug 10 00:39:52.075: INFO: stdout: "e2e-test-crd-publish-openapi-2096-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Aug 10 00:39:52.075: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3112 apply -f -' Aug 10 00:39:52.361: INFO: stderr: "" Aug 10 00:39:52.361: INFO: stdout: "e2e-test-crd-publish-openapi-2096-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Aug 10 00:39:52.361: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3112 delete e2e-test-crd-publish-openapi-2096-crds test-foo' Aug 10 00:39:52.500: INFO: stderr: "" Aug 10 00:39:52.500: INFO: stdout: 
"e2e-test-crd-publish-openapi-2096-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Aug 10 00:39:52.500: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3112 create -f -' Aug 10 00:39:52.760: INFO: rc: 1 Aug 10 00:39:52.760: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3112 apply -f -' Aug 10 00:39:53.066: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Aug 10 00:39:53.066: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3112 create -f -' Aug 10 00:39:53.356: INFO: rc: 1 Aug 10 00:39:53.357: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3112 apply -f -' Aug 10 00:39:53.605: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Aug 10 00:39:53.606: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2096-crds' Aug 10 00:39:53.923: INFO: stderr: "" Aug 10 00:39:53.923: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2096-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Aug 10 00:39:53.924: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2096-crds.metadata' Aug 10 00:39:54.272: INFO: stderr: "" Aug 10 00:39:54.272: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2096-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n The name of the cluster which the object belongs to. 
This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. 
In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. 
The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. 
An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. 
More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Aug 10 00:39:54.273: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2096-crds.spec' Aug 10 00:39:54.579: INFO: stderr: "" Aug 10 00:39:54.579: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2096-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Aug 10 00:39:54.579: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2096-crds.spec.bars' Aug 10 00:39:54.857: INFO: stderr: "" Aug 10 00:39:54.857: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2096-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Aug 10 00:39:54.857: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2096-crds.spec.bars2' Aug 10 00:39:55.154: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:39:57.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3112" for this suite. 
• [SLOW TEST:13.617 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":303,"completed":219,"skipped":3580,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:39:57.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating api versions Aug 10 00:39:57.250: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config api-versions' Aug 10 00:39:57.497: INFO: stderr: "" Aug 10 00:39:57.497: INFO: stdout: 
"admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:39:57.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7290" for this suite. 
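The assertion this test makes is narrow: `v1` must appear as an exact, standalone entry in the newline-separated `kubectl api-versions` output, not merely as a substring (which every `*/v1` group would satisfy). A minimal offline sketch of that check, using an abridged copy of the stdout captured above:

```python
# Abridged from the captured `kubectl api-versions` stdout in the log above;
# the point is the membership check, not the full list.
stdout = (
    "admissionregistration.k8s.io/v1\n"
    "apps/v1\n"
    "batch/v1\n"
    "networking.k8s.io/v1\n"
    "v1\n"
)

# Split into whole lines so "v1" must match an entire entry.
versions = stdout.strip().split("\n")
assert "v1" in versions                  # exact entry, the conformance check
assert "v1" in stdout                    # a substring test would also pass here...
assert all(v != "v1" or v == "v1" for v in versions)  # ...but only the entry test is meaningful
print(versions)
```

A plain `"v1" in stdout` substring test would be satisfied by `apps/v1` alone, which is why the split-into-entries step matters.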
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":303,"completed":220,"skipped":3612,"failed":0} SSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:39:57.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Aug 10 00:39:57.600: INFO: Waiting up to 5m0s for pod "downward-api-faf3720f-6d9e-485b-a2f5-2872a06f181f" in namespace "downward-api-2300" to be "Succeeded or Failed" Aug 10 00:39:57.623: INFO: Pod "downward-api-faf3720f-6d9e-485b-a2f5-2872a06f181f": Phase="Pending", Reason="", readiness=false. Elapsed: 23.135717ms Aug 10 00:39:59.628: INFO: Pod "downward-api-faf3720f-6d9e-485b-a2f5-2872a06f181f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028584419s Aug 10 00:40:01.633: INFO: Pod "downward-api-faf3720f-6d9e-485b-a2f5-2872a06f181f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.033329746s STEP: Saw pod success Aug 10 00:40:01.633: INFO: Pod "downward-api-faf3720f-6d9e-485b-a2f5-2872a06f181f" satisfied condition "Succeeded or Failed" Aug 10 00:40:01.637: INFO: Trying to get logs from node latest-worker2 pod downward-api-faf3720f-6d9e-485b-a2f5-2872a06f181f container dapi-container: STEP: delete the pod Aug 10 00:40:01.717: INFO: Waiting for pod downward-api-faf3720f-6d9e-485b-a2f5-2872a06f181f to disappear Aug 10 00:40:01.731: INFO: Pod downward-api-faf3720f-6d9e-485b-a2f5-2872a06f181f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:40:01.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2300" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":303,"completed":221,"skipped":3618,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:40:01.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 10 00:40:01.864: INFO: Waiting up to 5m0s for pod "busybox-user-65534-1b6e2e23-3e6b-4e57-bb33-4f225f2c39e7" in namespace "security-context-test-2280" to be "Succeeded or Failed" Aug 10 00:40:01.905: INFO: Pod "busybox-user-65534-1b6e2e23-3e6b-4e57-bb33-4f225f2c39e7": Phase="Pending", Reason="", readiness=false. Elapsed: 40.861644ms Aug 10 00:40:03.909: INFO: Pod "busybox-user-65534-1b6e2e23-3e6b-4e57-bb33-4f225f2c39e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045260079s Aug 10 00:40:05.913: INFO: Pod "busybox-user-65534-1b6e2e23-3e6b-4e57-bb33-4f225f2c39e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048843785s Aug 10 00:40:05.913: INFO: Pod "busybox-user-65534-1b6e2e23-3e6b-4e57-bb33-4f225f2c39e7" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:40:05.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2280" for this suite. 
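The pod behind this test is small: a busybox container whose security context pins the UID to 65534 ("nobody"), succeeding only if the kubelet actually honors `runAsUser`. A minimal sketch of the equivalent manifest as a Python dict — the container name and command here are illustrative assumptions, not the e2e framework's exact values:

```python
# Sketch of the kind of pod the runAsUser conformance test creates.
# The container command is a hypothetical way to verify the effective UID;
# the real test inspects the container's exit status similarly.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "busybox-user-65534"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [
            {
                "name": "busybox",
                "image": "busybox",
                # `id -u` prints the effective UID, so the container exits 0
                # (pod phase Succeeded) only if runAsUser was applied.
                "command": ["sh", "-c", "test $(id -u) -eq 65534"],
                "securityContext": {"runAsUser": 65534},
            }
        ],
    },
}

assert pod["spec"]["containers"][0]["securityContext"]["runAsUser"] == 65534
```

Phase `Succeeded` with `restartPolicy: Never` is the signal the test waits for, which is why the log polls until `Phase="Succeeded"`.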
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":222,"skipped":3634,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:40:05.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 10 00:40:06.037: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0acc7a39-c8fd-4835-80f5-f9fa9d5b71ef" in namespace "downward-api-9947" to be "Succeeded or Failed" Aug 10 00:40:06.057: INFO: Pod "downwardapi-volume-0acc7a39-c8fd-4835-80f5-f9fa9d5b71ef": Phase="Pending", Reason="", readiness=false. Elapsed: 20.393395ms Aug 10 00:40:08.107: INFO: Pod "downwardapi-volume-0acc7a39-c8fd-4835-80f5-f9fa9d5b71ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070722268s Aug 10 00:40:10.112: INFO: Pod "downwardapi-volume-0acc7a39-c8fd-4835-80f5-f9fa9d5b71ef": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.075480569s STEP: Saw pod success Aug 10 00:40:10.112: INFO: Pod "downwardapi-volume-0acc7a39-c8fd-4835-80f5-f9fa9d5b71ef" satisfied condition "Succeeded or Failed" Aug 10 00:40:10.115: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-0acc7a39-c8fd-4835-80f5-f9fa9d5b71ef container client-container: STEP: delete the pod Aug 10 00:40:10.184: INFO: Waiting for pod downwardapi-volume-0acc7a39-c8fd-4835-80f5-f9fa9d5b71ef to disappear Aug 10 00:40:10.186: INFO: Pod downwardapi-volume-0acc7a39-c8fd-4835-80f5-f9fa9d5b71ef no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:40:10.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9947" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":223,"skipped":3640,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:40:10.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: 
Creating projection with secret that has name projected-secret-test-9cc9954a-600d-4f1c-b1f4-d6b95bdb0970 STEP: Creating a pod to test consume secrets Aug 10 00:40:10.284: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2bc5e77b-b3a9-4966-b2f7-a5ac79872bfc" in namespace "projected-644" to be "Succeeded or Failed" Aug 10 00:40:10.347: INFO: Pod "pod-projected-secrets-2bc5e77b-b3a9-4966-b2f7-a5ac79872bfc": Phase="Pending", Reason="", readiness=false. Elapsed: 62.861437ms Aug 10 00:40:12.350: INFO: Pod "pod-projected-secrets-2bc5e77b-b3a9-4966-b2f7-a5ac79872bfc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066501681s Aug 10 00:40:14.365: INFO: Pod "pod-projected-secrets-2bc5e77b-b3a9-4966-b2f7-a5ac79872bfc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.080963994s STEP: Saw pod success Aug 10 00:40:14.365: INFO: Pod "pod-projected-secrets-2bc5e77b-b3a9-4966-b2f7-a5ac79872bfc" satisfied condition "Succeeded or Failed" Aug 10 00:40:14.367: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-2bc5e77b-b3a9-4966-b2f7-a5ac79872bfc container projected-secret-volume-test: STEP: delete the pod Aug 10 00:40:14.386: INFO: Waiting for pod pod-projected-secrets-2bc5e77b-b3a9-4966-b2f7-a5ac79872bfc to disappear Aug 10 00:40:14.402: INFO: Pod pod-projected-secrets-2bc5e77b-b3a9-4966-b2f7-a5ac79872bfc no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:40:14.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-644" for this suite. 
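One detail worth decoding in the object dumps throughout this log: volume modes such as `DefaultMode:*420` are decimal renderings of octal file permissions, because the API serializes the mode as a plain integer. A quick sketch of the conversion:

```python
# Kubernetes stores secret/projected-volume defaultMode as a decimal int,
# so familiar octal permissions look odd in struct dumps: 420 is 0o644.
assert 420 == 0o644            # rw-r--r--, the default token/secret mode
assert oct(420) == "0o644"

# Going the other way: a stricter octal mode to the decimal the API stores.
desired = 0o440                # r--r-----
print(desired)                 # prints 288
```

So `DefaultMode:*420` in the pod dumps above is just the ordinary 0644 default, not an unusual permission set.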
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":224,"skipped":3695,"failed":0} SSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:40:14.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 10 00:40:14.780: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Aug 10 00:40:19.784: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 10 00:40:19.784: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Aug 10 00:40:19.841: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-9029 /apis/apps/v1/namespaces/deployment-9029/deployments/test-cleanup-deployment 6452f07e-2b6d-433d-9e52-523ded4cf75c 5792299 1 2020-08-10 00:40:19 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2020-08-10 00:40:19 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004828318 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Aug 10 00:40:19.899: INFO: New ReplicaSet "test-cleanup-deployment-5d446bdd47" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-5d446bdd47 deployment-9029 /apis/apps/v1/namespaces/deployment-9029/replicasets/test-cleanup-deployment-5d446bdd47 a2fbf77b-9841-4a81-adb0-ed705e07f1f2 5792301 1 2020-08-10 00:40:19 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 6452f07e-2b6d-433d-9e52-523ded4cf75c 0xc0048c8257 0xc0048c8258}] [] [{kube-controller-manager Update apps/v1 2020-08-10 00:40:19 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6452f07e-2b6d-433d-9e52-523ded4cf75c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5d446bdd47,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0048c82e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 10 00:40:19.899: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Aug 10 00:40:19.899: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-9029 /apis/apps/v1/namespaces/deployment-9029/replicasets/test-cleanup-controller 5bfc7cd2-687b-4313-a52d-31ca2fb5be18 5792300 1 2020-08-10 00:40:14 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 6452f07e-2b6d-433d-9e52-523ded4cf75c 0xc0048c811f 0xc0048c8130}] [] [{e2e.test Update apps/v1 2020-08-10 00:40:14 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-08-10 00:40:19 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"6452f07e-2b6d-433d-9e52-523ded4cf75c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] 
[] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0048c81e8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Aug 10 00:40:19.934: INFO: Pod "test-cleanup-controller-9zn6j" is available: &Pod{ObjectMeta:{test-cleanup-controller-9zn6j test-cleanup-controller- deployment-9029 /api/v1/namespaces/deployment-9029/pods/test-cleanup-controller-9zn6j 137b6016-cc55-426d-b53a-9895bebc2065 5792284 0 2020-08-10 00:40:14 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 5bfc7cd2-687b-4313-a52d-31ca2fb5be18 0xc00486a377 0xc00486a378}] [] [{kube-controller-manager Update v1 2020-08-10 00:40:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5bfc7cd2-687b-4313-a52d-31ca2fb5be18\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-10 00:40:18 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.67\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-g9ct9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-g9ct9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-g9ct9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,De
precatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-10 00:40:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-10 00:40:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-10 00:40:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-10 00:40:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.67,StartTime:2020-08-10 00:40:14 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-10 00:40:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://42669863a246d31313f95ba4d43b33ca9c0e18dbe1e2502f6c50e941ca0664d2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.67,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 10 00:40:19.935: INFO: Pod "test-cleanup-deployment-5d446bdd47-k2twm" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-5d446bdd47-k2twm test-cleanup-deployment-5d446bdd47- deployment-9029 /api/v1/namespaces/deployment-9029/pods/test-cleanup-deployment-5d446bdd47-k2twm a2dc3b64-0873-4700-936c-2968947ae98a 5792307 0 2020-08-10 00:40:19 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-5d446bdd47 a2fbf77b-9841-4a81-adb0-ed705e07f1f2 0xc00486a537 0xc00486a538}] [] [{kube-controller-manager Update v1 2020-08-10 00:40:19 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a2fbf77b-9841-4a81-adb0-ed705e07f1f2\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-g9ct9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-g9ct9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-g9ct9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,Win
dowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-10 00:40:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:40:19.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9029" for this suite. • [SLOW TEST:5.616 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":303,"completed":225,"skipped":3701,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:40:20.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-9549 STEP: creating service affinity-nodeport in namespace services-9549 STEP: creating replication controller affinity-nodeport in namespace services-9549 I0810 00:40:20.697030 8 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-9549, replica count: 3 I0810 
00:40:23.747348 8 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0810 00:40:26.747582 8 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0810 00:40:29.747848 8 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 10 00:40:29.758: INFO: Creating new exec pod Aug 10 00:40:34.851: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-9549 execpod-affinityrbx4f -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' Aug 10 00:40:35.056: INFO: stderr: "I0810 00:40:34.994474 2221 log.go:181] (0xc00063efd0) (0xc0007305a0) Create stream\nI0810 00:40:34.994544 2221 log.go:181] (0xc00063efd0) (0xc0007305a0) Stream added, broadcasting: 1\nI0810 00:40:34.999824 2221 log.go:181] (0xc00063efd0) Reply frame received for 1\nI0810 00:40:34.999873 2221 log.go:181] (0xc00063efd0) (0xc000a9a820) Create stream\nI0810 00:40:34.999901 2221 log.go:181] (0xc00063efd0) (0xc000a9a820) Stream added, broadcasting: 3\nI0810 00:40:35.001119 2221 log.go:181] (0xc00063efd0) Reply frame received for 3\nI0810 00:40:35.001172 2221 log.go:181] (0xc00063efd0) (0xc00088a000) Create stream\nI0810 00:40:35.001189 2221 log.go:181] (0xc00063efd0) (0xc00088a000) Stream added, broadcasting: 5\nI0810 00:40:35.002053 2221 log.go:181] (0xc00063efd0) Reply frame received for 5\nI0810 00:40:35.048016 2221 log.go:181] (0xc00063efd0) Data frame received for 5\nI0810 00:40:35.048066 2221 log.go:181] (0xc00088a000) (5) Data frame handling\nI0810 00:40:35.048094 2221 log.go:181] (0xc00088a000) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport 80\nI0810 00:40:35.048477 2221 log.go:181] (0xc00063efd0) Data frame received for 5\nI0810 
00:40:35.048502 2221 log.go:181] (0xc00088a000) (5) Data frame handling\nI0810 00:40:35.048537 2221 log.go:181] (0xc00088a000) (5) Data frame sent\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI0810 00:40:35.048563 2221 log.go:181] (0xc00063efd0) Data frame received for 5\nI0810 00:40:35.048572 2221 log.go:181] (0xc00088a000) (5) Data frame handling\nI0810 00:40:35.048686 2221 log.go:181] (0xc00063efd0) Data frame received for 3\nI0810 00:40:35.048707 2221 log.go:181] (0xc000a9a820) (3) Data frame handling\nI0810 00:40:35.050189 2221 log.go:181] (0xc00063efd0) Data frame received for 1\nI0810 00:40:35.050218 2221 log.go:181] (0xc0007305a0) (1) Data frame handling\nI0810 00:40:35.050234 2221 log.go:181] (0xc0007305a0) (1) Data frame sent\nI0810 00:40:35.050257 2221 log.go:181] (0xc00063efd0) (0xc0007305a0) Stream removed, broadcasting: 1\nI0810 00:40:35.050271 2221 log.go:181] (0xc00063efd0) Go away received\nI0810 00:40:35.050699 2221 log.go:181] (0xc00063efd0) (0xc0007305a0) Stream removed, broadcasting: 1\nI0810 00:40:35.050725 2221 log.go:181] (0xc00063efd0) (0xc000a9a820) Stream removed, broadcasting: 3\nI0810 00:40:35.050735 2221 log.go:181] (0xc00063efd0) (0xc00088a000) Stream removed, broadcasting: 5\n" Aug 10 00:40:35.056: INFO: stdout: "" Aug 10 00:40:35.057: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-9549 execpod-affinityrbx4f -- /bin/sh -x -c nc -zv -t -w 2 10.100.205.198 80' Aug 10 00:40:35.257: INFO: stderr: "I0810 00:40:35.198985 2240 log.go:181] (0xc000628d10) (0xc000ac5ea0) Create stream\nI0810 00:40:35.199048 2240 log.go:181] (0xc000628d10) (0xc000ac5ea0) Stream added, broadcasting: 1\nI0810 00:40:35.201554 2240 log.go:181] (0xc000628d10) Reply frame received for 1\nI0810 00:40:35.201605 2240 log.go:181] (0xc000628d10) (0xc0003145a0) Create stream\nI0810 00:40:35.201622 2240 log.go:181] (0xc000628d10) (0xc0003145a0) Stream added, broadcasting: 
3\nI0810 00:40:35.202535 2240 log.go:181] (0xc000628d10) Reply frame received for 3\nI0810 00:40:35.202571 2240 log.go:181] (0xc000628d10) (0xc000d10500) Create stream\nI0810 00:40:35.202594 2240 log.go:181] (0xc000628d10) (0xc000d10500) Stream added, broadcasting: 5\nI0810 00:40:35.203383 2240 log.go:181] (0xc000628d10) Reply frame received for 5\nI0810 00:40:35.249224 2240 log.go:181] (0xc000628d10) Data frame received for 3\nI0810 00:40:35.249608 2240 log.go:181] (0xc000628d10) Data frame received for 5\nI0810 00:40:35.249641 2240 log.go:181] (0xc000d10500) (5) Data frame handling\nI0810 00:40:35.249661 2240 log.go:181] (0xc000d10500) (5) Data frame sent\nI0810 00:40:35.249686 2240 log.go:181] (0xc000628d10) Data frame received for 5\nI0810 00:40:35.249701 2240 log.go:181] (0xc000d10500) (5) Data frame handling\n+ nc -zv -t -w 2 10.100.205.198 80\nConnection to 10.100.205.198 80 port [tcp/http] succeeded!\nI0810 00:40:35.250062 2240 log.go:181] (0xc0003145a0) (3) Data frame handling\nI0810 00:40:35.251336 2240 log.go:181] (0xc000628d10) Data frame received for 1\nI0810 00:40:35.251416 2240 log.go:181] (0xc000ac5ea0) (1) Data frame handling\nI0810 00:40:35.251454 2240 log.go:181] (0xc000ac5ea0) (1) Data frame sent\nI0810 00:40:35.252159 2240 log.go:181] (0xc000628d10) (0xc000ac5ea0) Stream removed, broadcasting: 1\nI0810 00:40:35.252219 2240 log.go:181] (0xc000628d10) Go away received\nI0810 00:40:35.252425 2240 log.go:181] (0xc000628d10) (0xc000ac5ea0) Stream removed, broadcasting: 1\nI0810 00:40:35.252443 2240 log.go:181] (0xc000628d10) (0xc0003145a0) Stream removed, broadcasting: 3\nI0810 00:40:35.252449 2240 log.go:181] (0xc000628d10) (0xc000d10500) Stream removed, broadcasting: 5\n" Aug 10 00:40:35.257: INFO: stdout: "" Aug 10 00:40:35.257: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-9549 execpod-affinityrbx4f -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.14 31578' Aug 10 
00:40:35.429: INFO: stderr: "I0810 00:40:35.366633 2258 log.go:181] (0xc0007d1970) (0xc0009ac640) Create stream\nI0810 00:40:35.366695 2258 log.go:181] (0xc0007d1970) (0xc0009ac640) Stream added, broadcasting: 1\nI0810 00:40:35.370515 2258 log.go:181] (0xc0007d1970) Reply frame received for 1\nI0810 00:40:35.370582 2258 log.go:181] (0xc0007d1970) (0xc0009768c0) Create stream\nI0810 00:40:35.370597 2258 log.go:181] (0xc0007d1970) (0xc0009768c0) Stream added, broadcasting: 3\nI0810 00:40:35.371535 2258 log.go:181] (0xc0007d1970) Reply frame received for 3\nI0810 00:40:35.371584 2258 log.go:181] (0xc0007d1970) (0xc0009680a0) Create stream\nI0810 00:40:35.371600 2258 log.go:181] (0xc0007d1970) (0xc0009680a0) Stream added, broadcasting: 5\nI0810 00:40:35.372316 2258 log.go:181] (0xc0007d1970) Reply frame received for 5\nI0810 00:40:35.422018 2258 log.go:181] (0xc0007d1970) Data frame received for 5\nI0810 00:40:35.422056 2258 log.go:181] (0xc0009680a0) (5) Data frame handling\nI0810 00:40:35.422086 2258 log.go:181] (0xc0009680a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.14 31578\nI0810 00:40:35.422399 2258 log.go:181] (0xc0007d1970) Data frame received for 5\nI0810 00:40:35.422424 2258 log.go:181] (0xc0009680a0) (5) Data frame handling\nI0810 00:40:35.422448 2258 log.go:181] (0xc0009680a0) (5) Data frame sent\nConnection to 172.18.0.14 31578 port [tcp/31578] succeeded!\nI0810 00:40:35.422606 2258 log.go:181] (0xc0007d1970) Data frame received for 5\nI0810 00:40:35.422631 2258 log.go:181] (0xc0009680a0) (5) Data frame handling\nI0810 00:40:35.422908 2258 log.go:181] (0xc0007d1970) Data frame received for 3\nI0810 00:40:35.422922 2258 log.go:181] (0xc0009768c0) (3) Data frame handling\nI0810 00:40:35.424052 2258 log.go:181] (0xc0007d1970) Data frame received for 1\nI0810 00:40:35.424084 2258 log.go:181] (0xc0009ac640) (1) Data frame handling\nI0810 00:40:35.424102 2258 log.go:181] (0xc0009ac640) (1) Data frame sent\nI0810 00:40:35.424117 2258 log.go:181] 
(0xc0007d1970) (0xc0009ac640) Stream removed, broadcasting: 1\nI0810 00:40:35.424143 2258 log.go:181] (0xc0007d1970) Go away received\nI0810 00:40:35.424422 2258 log.go:181] (0xc0007d1970) (0xc0009ac640) Stream removed, broadcasting: 1\nI0810 00:40:35.424436 2258 log.go:181] (0xc0007d1970) (0xc0009768c0) Stream removed, broadcasting: 3\nI0810 00:40:35.424442 2258 log.go:181] (0xc0007d1970) (0xc0009680a0) Stream removed, broadcasting: 5\n" Aug 10 00:40:35.429: INFO: stdout: "" Aug 10 00:40:35.429: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-9549 execpod-affinityrbx4f -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 31578' Aug 10 00:40:35.647: INFO: stderr: "I0810 00:40:35.561871 2276 log.go:181] (0xc00072b810) (0xc0007d66e0) Create stream\nI0810 00:40:35.561936 2276 log.go:181] (0xc00072b810) (0xc0007d66e0) Stream added, broadcasting: 1\nI0810 00:40:35.567250 2276 log.go:181] (0xc00072b810) Reply frame received for 1\nI0810 00:40:35.567308 2276 log.go:181] (0xc00072b810) (0xc000cb10e0) Create stream\nI0810 00:40:35.567333 2276 log.go:181] (0xc00072b810) (0xc000cb10e0) Stream added, broadcasting: 3\nI0810 00:40:35.568283 2276 log.go:181] (0xc00072b810) Reply frame received for 3\nI0810 00:40:35.568335 2276 log.go:181] (0xc00072b810) (0xc000cac3c0) Create stream\nI0810 00:40:35.568359 2276 log.go:181] (0xc00072b810) (0xc000cac3c0) Stream added, broadcasting: 5\nI0810 00:40:35.569224 2276 log.go:181] (0xc00072b810) Reply frame received for 5\nI0810 00:40:35.639664 2276 log.go:181] (0xc00072b810) Data frame received for 5\nI0810 00:40:35.639704 2276 log.go:181] (0xc000cac3c0) (5) Data frame handling\nI0810 00:40:35.639719 2276 log.go:181] (0xc000cac3c0) (5) Data frame sent\nI0810 00:40:35.639728 2276 log.go:181] (0xc00072b810) Data frame received for 5\nI0810 00:40:35.639737 2276 log.go:181] (0xc000cac3c0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.12 31578\nConnection to 
172.18.0.12 31578 port [tcp/31578] succeeded!\nI0810 00:40:35.639762 2276 log.go:181] (0xc00072b810) Data frame received for 3\nI0810 00:40:35.639779 2276 log.go:181] (0xc000cb10e0) (3) Data frame handling\nI0810 00:40:35.641088 2276 log.go:181] (0xc00072b810) Data frame received for 1\nI0810 00:40:35.641104 2276 log.go:181] (0xc0007d66e0) (1) Data frame handling\nI0810 00:40:35.641110 2276 log.go:181] (0xc0007d66e0) (1) Data frame sent\nI0810 00:40:35.641624 2276 log.go:181] (0xc00072b810) (0xc0007d66e0) Stream removed, broadcasting: 1\nI0810 00:40:35.641661 2276 log.go:181] (0xc00072b810) Go away received\nI0810 00:40:35.642175 2276 log.go:181] (0xc00072b810) (0xc0007d66e0) Stream removed, broadcasting: 1\nI0810 00:40:35.642200 2276 log.go:181] (0xc00072b810) (0xc000cb10e0) Stream removed, broadcasting: 3\nI0810 00:40:35.642212 2276 log.go:181] (0xc00072b810) (0xc000cac3c0) Stream removed, broadcasting: 5\n" Aug 10 00:40:35.647: INFO: stdout: "" Aug 10 00:40:35.647: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-9549 execpod-affinityrbx4f -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.14:31578/ ; done' Aug 10 00:40:35.934: INFO: stderr: "I0810 00:40:35.769684 2294 log.go:181] (0xc000a142c0) (0xc000a27ae0) Create stream\nI0810 00:40:35.769743 2294 log.go:181] (0xc000a142c0) (0xc000a27ae0) Stream added, broadcasting: 1\nI0810 00:40:35.772624 2294 log.go:181] (0xc000a142c0) Reply frame received for 1\nI0810 00:40:35.772708 2294 log.go:181] (0xc000a142c0) (0xc00083ed20) Create stream\nI0810 00:40:35.772869 2294 log.go:181] (0xc000a142c0) (0xc00083ed20) Stream added, broadcasting: 3\nI0810 00:40:35.775254 2294 log.go:181] (0xc000a142c0) Reply frame received for 3\nI0810 00:40:35.775293 2294 log.go:181] (0xc000a142c0) (0xc0007d8aa0) Create stream\nI0810 00:40:35.775311 2294 log.go:181] (0xc000a142c0) (0xc0007d8aa0) Stream added, 
broadcasting: 5\nI0810 00:40:35.776085 2294 log.go:181] (0xc000a142c0) Reply frame received for 5\nI0810 00:40:35.825481 2294 log.go:181] (0xc000a142c0) Data frame received for 3\nI0810 00:40:35.825517 2294 log.go:181] (0xc00083ed20) (3) Data frame handling\nI0810 00:40:35.825529 2294 log.go:181] (0xc00083ed20) (3) Data frame sent\nI0810 00:40:35.825552 2294 log.go:181] (0xc000a142c0) Data frame received for 5\nI0810 00:40:35.825561 2294 log.go:181] (0xc0007d8aa0) (5) Data frame handling\nI0810 00:40:35.825571 2294 log.go:181] (0xc0007d8aa0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31578/\nI0810 00:40:35.829687 2294 log.go:181] (0xc000a142c0) Data frame received for 3\nI0810 00:40:35.829709 2294 log.go:181] (0xc00083ed20) (3) Data frame handling\nI0810 00:40:35.829737 2294 log.go:181] (0xc00083ed20) (3) Data frame sent\nI0810 00:40:35.830313 2294 log.go:181] (0xc000a142c0) Data frame received for 3\nI0810 00:40:35.830337 2294 log.go:181] (0xc00083ed20) (3) Data frame handling\nI0810 00:40:35.830345 2294 log.go:181] (0xc00083ed20) (3) Data frame sent\nI0810 00:40:35.830363 2294 log.go:181] (0xc000a142c0) Data frame received for 5\nI0810 00:40:35.830398 2294 log.go:181] (0xc0007d8aa0) (5) Data frame handling\nI0810 00:40:35.830451 2294 log.go:181] (0xc0007d8aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31578/\nI0810 00:40:35.838204 2294 log.go:181] (0xc000a142c0) Data frame received for 3\nI0810 00:40:35.838244 2294 log.go:181] (0xc00083ed20) (3) Data frame handling\nI0810 00:40:35.838280 2294 log.go:181] (0xc00083ed20) (3) Data frame sent\nI0810 00:40:35.838993 2294 log.go:181] (0xc000a142c0) Data frame received for 5\nI0810 00:40:35.839068 2294 log.go:181] (0xc0007d8aa0) (5) Data frame handling\nI0810 00:40:35.839105 2294 log.go:181] (0xc0007d8aa0) (5) Data frame sent\nI0810 00:40:35.839121 2294 log.go:181] (0xc000a142c0) Data frame received for 5\nI0810 00:40:35.839133 2294 
log.go:181] (0xc0007d8aa0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31578/\nI0810 00:40:35.839162 2294 log.go:181] (0xc000a142c0) Data frame received for 3\nI0810 00:40:35.839194 2294 log.go:181] (0xc00083ed20) (3) Data frame handling\nI0810 00:40:35.839228 2294 log.go:181] (0xc0007d8aa0) (5) Data frame sent\nI0810 00:40:35.839249 2294 log.go:181] (0xc00083ed20) (3) Data frame sent\nI0810 00:40:35.842884 2294 log.go:181] (0xc000a142c0) Data frame received for 3\nI0810 00:40:35.842909 2294 log.go:181] (0xc00083ed20) (3) Data frame handling\nI0810 00:40:35.842937 2294 log.go:181] (0xc00083ed20) (3) Data frame sent\nI0810 00:40:35.843234 2294 log.go:181] (0xc000a142c0) Data frame received for 3\nI0810 00:40:35.843257 2294 log.go:181] (0xc00083ed20) (3) Data frame handling\nI0810 00:40:35.843274 2294 log.go:181] (0xc00083ed20) (3) Data frame sent\nI0810 00:40:35.843328 2294 log.go:181] (0xc000a142c0) Data frame received for 5\nI0810 00:40:35.843341 2294 log.go:181] (0xc0007d8aa0) (5) Data frame handling\nI0810 00:40:35.843360 2294 log.go:181] (0xc0007d8aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31578/\nI0810 00:40:35.847774 2294 log.go:181] (0xc000a142c0) Data frame received for 3\nI0810 00:40:35.847800 2294 log.go:181] (0xc00083ed20) (3) Data frame handling\nI0810 00:40:35.847817 2294 log.go:181] (0xc00083ed20) (3) Data frame sent\nI0810 00:40:35.848516 2294 log.go:181] (0xc000a142c0) Data frame received for 5\nI0810 00:40:35.848534 2294 log.go:181] (0xc0007d8aa0) (5) Data frame handling\nI0810 00:40:35.848540 2294 log.go:181] (0xc0007d8aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31578/\nI0810 00:40:35.848629 2294 log.go:181] (0xc000a142c0) Data frame received for 3\nI0810 00:40:35.848656 2294 log.go:181] (0xc00083ed20) (3) Data frame handling\nI0810 00:40:35.848679 2294 log.go:181] (0xc00083ed20) (3) Data frame sent\nI0810 00:40:35.854569 
2294 log.go:181] (0xc000a142c0) Data frame received for 3\nI0810 00:40:35.854588 2294 log.go:181] (0xc00083ed20) (3) Data frame handling\nI0810 00:40:35.854617 2294 log.go:181] (0xc00083ed20) (3) Data frame sent\nI0810 00:40:35.855361 2294 log.go:181] (0xc000a142c0) Data frame received for 3\nI0810 00:40:35.855394 2294 log.go:181] (0xc00083ed20) (3) Data frame handling\nI0810 00:40:35.855418 2294 log.go:181] (0xc00083ed20) (3) Data frame sent\nI0810 00:40:35.855444 2294 log.go:181] (0xc000a142c0) Data frame received for 5\nI0810 00:40:35.855461 2294 log.go:181] (0xc0007d8aa0) (5) Data frame handling\nI0810 00:40:35.855487 2294 log.go:181] (0xc0007d8aa0) (5) Data frame sent\nI0810 00:40:35.855510 2294 log.go:181] (0xc000a142c0) Data frame received for 5\nI0810 00:40:35.855526 2294 log.go:181] (0xc0007d8aa0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31578/\nI0810 00:40:35.855556 2294 log.go:181] (0xc0007d8aa0) (5) Data frame sent\nI0810 00:40:35.859820 2294 log.go:181] (0xc000a142c0) Data frame received for 3\nI0810 00:40:35.859839 2294 log.go:181] (0xc00083ed20) (3) Data frame handling\nI0810 00:40:35.859866 2294 log.go:181] (0xc00083ed20) (3) Data frame sent\nI0810 00:40:35.860250 2294 log.go:181] (0xc000a142c0) Data frame received for 3\nI0810 00:40:35.860263 2294 log.go:181] (0xc00083ed20) (3) Data frame handling\nI0810 00:40:35.860271 2294 log.go:181] (0xc00083ed20) (3) Data frame sent\nI0810 00:40:35.860287 2294 log.go:181] (0xc000a142c0) Data frame received for 5\nI0810 00:40:35.860297 2294 log.go:181] (0xc0007d8aa0) (5) Data frame handling\nI0810 00:40:35.860305 2294 log.go:181] (0xc0007d8aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31578/\nI0810 00:40:35.866329 2294 log.go:181] (0xc000a142c0) Data frame received for 3\nI0810 00:40:35.866352 2294 log.go:181] (0xc00083ed20) (3) Data frame handling\nI0810 00:40:35.866383 2294 log.go:181] (0xc00083ed20) (3) Data frame sent\nI0810 
00:40:35.866731 2294 log.go:181] (0xc000a142c0) Data frame received for 3\nI0810 00:40:35.866766 2294 log.go:181] (0xc00083ed20) (3) Data frame handling\nI0810 00:40:35.866778 2294 log.go:181] (0xc00083ed20) (3) Data frame sent\nI0810 00:40:35.866791 2294 log.go:181] (0xc000a142c0) Data frame received for 5\nI0810 00:40:35.866799 2294 log.go:181] (0xc0007d8aa0) (5) Data frame handling\nI0810 00:40:35.866807 2294 log.go:181] (0xc0007d8aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31578/\nI0810 00:40:35.873604 2294 log.go:181] (0xc000a142c0) Data frame received for 5\nI0810 00:40:35.873637 2294 log.go:181] (0xc0007d8aa0) (5) Data frame handling\nI0810 00:40:35.873649 2294 log.go:181] (0xc0007d8aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31578/\nI0810 00:40:35.873663 2294 log.go:181] (0xc000a142c0) Data frame received for 3\nI0810 00:40:35.873670 2294 log.go:181] (0xc00083ed20) (3) Data frame handling\nI0810 00:40:35.873678 2294 log.go:181] (0xc00083ed20) (3) Data frame sent\nI0810 00:40:35.873685 2294 log.go:181] (0xc000a142c0) Data frame received for 3\nI0810 00:40:35.873692 2294 log.go:181] (0xc00083ed20) (3) Data frame handling\nI0810 00:40:35.873708 2294 log.go:181] (0xc00083ed20) (3) Data frame sent\nI0810 00:40:35.878088 2294 log.go:181] (0xc000a142c0) Data frame received for 3\nI0810 00:40:35.878110 2294 log.go:181] (0xc00083ed20) (3) Data frame handling\nI0810 00:40:35.878123 2294 log.go:181] (0xc00083ed20) (3) Data frame sent\nI0810 00:40:35.878594 2294 log.go:181] (0xc000a142c0) Data frame received for 3\nI0810 00:40:35.878609 2294 log.go:181] (0xc00083ed20) (3) Data frame handling\nI0810 00:40:35.878617 2294 log.go:181] (0xc00083ed20) (3) Data frame sent\nI0810 00:40:35.878627 2294 log.go:181] (0xc000a142c0) Data frame received for 5\nI0810 00:40:35.878633 2294 log.go:181] (0xc0007d8aa0) (5) Data frame handling\nI0810 00:40:35.878640 2294 log.go:181] (0xc0007d8aa0) (5) Data 
frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31578/\nI0810 00:40:35.885179 2294 log.go:181] (0xc000a142c0) Data frame received for 3\nI0810 00:40:35.885194 2294 log.go:181] (0xc00083ed20) (3) Data frame handling\nI0810 00:40:35.885208 2294 log.go:181] (0xc00083ed20) (3) Data frame sent\nI0810 00:40:35.885768 2294 log.go:181] (0xc000a142c0) Data frame received for 5\nI0810 00:40:35.885779 2294 log.go:181] (0xc0007d8aa0) (5) Data frame handling\nI0810 00:40:35.885785 2294 log.go:181] (0xc0007d8aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31578/\nI0810 00:40:35.885844 2294 log.go:181] (0xc000a142c0) Data frame received for 3\nI0810 00:40:35.885858 2294 log.go:181] (0xc00083ed20) (3) Data frame handling\nI0810 00:40:35.885871 2294 log.go:181] (0xc00083ed20) (3) Data frame sent\nI0810 00:40:35.891246 2294 log.go:181] (0xc000a142c0) Data frame received for 3\nI0810 00:40:35.891259 2294 log.go:181] (0xc00083ed20) (3) Data frame handling\nI0810 00:40:35.891273 2294 log.go:181] (0xc00083ed20) (3) Data frame sent\nI0810 00:40:35.891683 2294 log.go:181] (0xc000a142c0) Data frame received for 3\nI0810 00:40:35.891700 2294 log.go:181] (0xc00083ed20) (3) Data frame handling\nI0810 00:40:35.891716 2294 log.go:181] (0xc00083ed20) (3) Data frame sent\nI0810 00:40:35.891867 2294 log.go:181] (0xc000a142c0) Data frame received for 5\nI0810 00:40:35.891890 2294 log.go:181] (0xc0007d8aa0) (5) Data frame handling\nI0810 00:40:35.891913 2294 log.go:181] (0xc0007d8aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31578/\nI0810 00:40:35.897777 2294 log.go:181] (0xc000a142c0) Data frame received for 3\nI0810 00:40:35.897809 2294 log.go:181] (0xc00083ed20) (3) Data frame handling\nI0810 00:40:35.897835 2294 log.go:181] (0xc00083ed20) (3) Data frame sent\nI0810 00:40:35.898492 2294 log.go:181] (0xc000a142c0) Data frame received for 3\nI0810 00:40:35.898578 2294 log.go:181] (0xc00083ed20) (3) 
Data frame handling\nI0810 00:40:35.898596 2294 log.go:181] (0xc00083ed20) (3) Data frame sent\nI0810 00:40:35.898613 2294 log.go:181] (0xc000a142c0) Data frame received for 5\nI0810 00:40:35.898629 2294 log.go:181] (0xc0007d8aa0) (5) Data frame handling\nI0810 00:40:35.898645 2294 log.go:181] (0xc0007d8aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31578/\nI0810 00:40:35.904516 2294 log.go:181] (0xc000a142c0) Data frame received for 3\nI0810 00:40:35.904530 2294 log.go:181] (0xc00083ed20) (3) Data frame handling\nI0810 00:40:35.904538 2294 log.go:181] (0xc00083ed20) (3) Data frame sent\nI0810 00:40:35.904968 2294 log.go:181] (0xc000a142c0) Data frame received for 3\nI0810 00:40:35.904983 2294 log.go:181] (0xc00083ed20) (3) Data frame handling\nI0810 00:40:35.904994 2294 log.go:181] (0xc00083ed20) (3) Data frame sent\nI0810 00:40:35.905011 2294 log.go:181] (0xc000a142c0) Data frame received for 5\nI0810 00:40:35.905036 2294 log.go:181] (0xc0007d8aa0) (5) Data frame handling\nI0810 00:40:35.905052 2294 log.go:181] (0xc0007d8aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31578/\nI0810 00:40:35.910955 2294 log.go:181] (0xc000a142c0) Data frame received for 3\nI0810 00:40:35.910995 2294 log.go:181] (0xc00083ed20) (3) Data frame handling\nI0810 00:40:35.911025 2294 log.go:181] (0xc00083ed20) (3) Data frame sent\nI0810 00:40:35.911449 2294 log.go:181] (0xc000a142c0) Data frame received for 3\nI0810 00:40:35.911478 2294 log.go:181] (0xc00083ed20) (3) Data frame handling\nI0810 00:40:35.911495 2294 log.go:181] (0xc00083ed20) (3) Data frame sent\nI0810 00:40:35.911517 2294 log.go:181] (0xc000a142c0) Data frame received for 5\nI0810 00:40:35.911541 2294 log.go:181] (0xc0007d8aa0) (5) Data frame handling\nI0810 00:40:35.911565 2294 log.go:181] (0xc0007d8aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31578/\nI0810 00:40:35.918529 2294 log.go:181] (0xc000a142c0) 
Data frame received for 3\nI0810 00:40:35.918559 2294 log.go:181] (0xc00083ed20) (3) Data frame handling\nI0810 00:40:35.918574 2294 log.go:181] (0xc00083ed20) (3) Data frame sent\nI0810 00:40:35.921177 2294 log.go:181] (0xc000a142c0) Data frame received for 5\nI0810 00:40:35.921205 2294 log.go:181] (0xc0007d8aa0) (5) Data frame handling\nI0810 00:40:35.921215 2294 log.go:181] (0xc0007d8aa0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.14:31578/\nI0810 00:40:35.921232 2294 log.go:181] (0xc000a142c0) Data frame received for 3\nI0810 00:40:35.921249 2294 log.go:181] (0xc00083ed20) (3) Data frame handling\nI0810 00:40:35.921257 2294 log.go:181] (0xc00083ed20) (3) Data frame sent\nI0810 00:40:35.925177 2294 log.go:181] (0xc000a142c0) Data frame received for 3\nI0810 00:40:35.925192 2294 log.go:181] (0xc00083ed20) (3) Data frame handling\nI0810 00:40:35.925205 2294 log.go:181] (0xc00083ed20) (3) Data frame sent\nI0810 00:40:35.925939 2294 log.go:181] (0xc000a142c0) Data frame received for 5\nI0810 00:40:35.925968 2294 log.go:181] (0xc0007d8aa0) (5) Data frame handling\nI0810 00:40:35.925992 2294 log.go:181] (0xc000a142c0) Data frame received for 3\nI0810 00:40:35.926017 2294 log.go:181] (0xc00083ed20) (3) Data frame handling\nI0810 00:40:35.927464 2294 log.go:181] (0xc000a142c0) Data frame received for 1\nI0810 00:40:35.927486 2294 log.go:181] (0xc000a27ae0) (1) Data frame handling\nI0810 00:40:35.927494 2294 log.go:181] (0xc000a27ae0) (1) Data frame sent\nI0810 00:40:35.927510 2294 log.go:181] (0xc000a142c0) (0xc000a27ae0) Stream removed, broadcasting: 1\nI0810 00:40:35.927535 2294 log.go:181] (0xc000a142c0) Go away received\nI0810 00:40:35.927877 2294 log.go:181] (0xc000a142c0) (0xc000a27ae0) Stream removed, broadcasting: 1\nI0810 00:40:35.927904 2294 log.go:181] (0xc000a142c0) (0xc00083ed20) Stream removed, broadcasting: 3\nI0810 00:40:35.927914 2294 log.go:181] (0xc000a142c0) (0xc0007d8aa0) Stream removed, broadcasting: 5\n" Aug 10 
00:40:35.934: INFO: stdout: "\naffinity-nodeport-v98rh\naffinity-nodeport-v98rh\naffinity-nodeport-v98rh\naffinity-nodeport-v98rh\naffinity-nodeport-v98rh\naffinity-nodeport-v98rh\naffinity-nodeport-v98rh\naffinity-nodeport-v98rh\naffinity-nodeport-v98rh\naffinity-nodeport-v98rh\naffinity-nodeport-v98rh\naffinity-nodeport-v98rh\naffinity-nodeport-v98rh\naffinity-nodeport-v98rh\naffinity-nodeport-v98rh\naffinity-nodeport-v98rh" Aug 10 00:40:35.934: INFO: Received response from host: affinity-nodeport-v98rh Aug 10 00:40:35.934: INFO: Received response from host: affinity-nodeport-v98rh Aug 10 00:40:35.934: INFO: Received response from host: affinity-nodeport-v98rh Aug 10 00:40:35.934: INFO: Received response from host: affinity-nodeport-v98rh Aug 10 00:40:35.934: INFO: Received response from host: affinity-nodeport-v98rh Aug 10 00:40:35.934: INFO: Received response from host: affinity-nodeport-v98rh Aug 10 00:40:35.934: INFO: Received response from host: affinity-nodeport-v98rh Aug 10 00:40:35.934: INFO: Received response from host: affinity-nodeport-v98rh Aug 10 00:40:35.934: INFO: Received response from host: affinity-nodeport-v98rh Aug 10 00:40:35.934: INFO: Received response from host: affinity-nodeport-v98rh Aug 10 00:40:35.934: INFO: Received response from host: affinity-nodeport-v98rh Aug 10 00:40:35.934: INFO: Received response from host: affinity-nodeport-v98rh Aug 10 00:40:35.934: INFO: Received response from host: affinity-nodeport-v98rh Aug 10 00:40:35.934: INFO: Received response from host: affinity-nodeport-v98rh Aug 10 00:40:35.934: INFO: Received response from host: affinity-nodeport-v98rh Aug 10 00:40:35.934: INFO: Received response from host: affinity-nodeport-v98rh Aug 10 00:40:35.934: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-9549, will wait for the garbage collector to delete the pods Aug 10 00:40:36.100: INFO: Deleting ReplicationController affinity-nodeport took: 5.524572ms Aug 
10 00:40:36.600: INFO: Terminating ReplicationController affinity-nodeport pods took: 500.235893ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:40:53.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9549" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:33.921 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":226,"skipped":3717,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:40:53.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-node] PodTemplates /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:40:54.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "podtemplate-5988" for this suite. •{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":303,"completed":227,"skipped":3730,"failed":0} S ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:40:54.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-4383 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-4383 I0810 00:40:54.326623 8 runners.go:190] Created replication controller with name: externalname-service, namespace: services-4383, replica count: 2 I0810 00:40:57.377066 8 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0810 00:41:00.377308 8 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 10 00:41:00.377: INFO: Creating new exec pod Aug 10 00:41:05.405: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-4383 execpod6wmpj -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Aug 10 00:41:05.627: INFO: stderr: "I0810 00:41:05.528095 2312 log.go:181] (0xc000da14a0) (0xc0008a9ae0) Create stream\nI0810 00:41:05.528142 2312 log.go:181] (0xc000da14a0) (0xc0008a9ae0) Stream added, broadcasting: 1\nI0810 00:41:05.532437 2312 log.go:181] (0xc000da14a0) Reply frame received for 1\nI0810 00:41:05.532464 2312 log.go:181] (0xc000da14a0) (0xc0004a6280) Create stream\nI0810 00:41:05.532472 2312 log.go:181] (0xc000da14a0) (0xc0004a6280) Stream added, broadcasting: 3\nI0810 00:41:05.533558 2312 log.go:181] (0xc000da14a0) Reply frame received for 3\nI0810 00:41:05.533616 2312 log.go:181] (0xc000da14a0) (0xc00021d0e0) Create stream\nI0810 00:41:05.533634 2312 log.go:181] (0xc000da14a0) (0xc00021d0e0) Stream added, broadcasting: 5\nI0810 00:41:05.534437 2312 log.go:181] (0xc000da14a0) Reply frame received for 5\nI0810 00:41:05.619729 2312 log.go:181] (0xc000da14a0) Data frame received for 5\nI0810 00:41:05.619780 2312 log.go:181] (0xc00021d0e0) (5) Data frame handling\nI0810 00:41:05.619796 2312 log.go:181] (0xc00021d0e0) (5) Data frame sent\nI0810 00:41:05.619807 2312 log.go:181] (0xc000da14a0) Data frame received for 5\nI0810 00:41:05.619817 2312 log.go:181] (0xc00021d0e0) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0810 00:41:05.619866 2312 log.go:181] (0xc000da14a0) Data frame received for 3\nI0810 00:41:05.619926 2312 log.go:181] (0xc0004a6280) (3) Data frame handling\nI0810 00:41:05.621276 2312 log.go:181] (0xc000da14a0) Data frame received for 1\nI0810 00:41:05.621294 2312 log.go:181] (0xc0008a9ae0) (1) Data frame handling\nI0810 00:41:05.621303 2312 log.go:181] (0xc0008a9ae0) (1) Data frame sent\nI0810 00:41:05.621313 2312 log.go:181] (0xc000da14a0) (0xc0008a9ae0) Stream removed, broadcasting: 
1\nI0810 00:41:05.621322 2312 log.go:181] (0xc000da14a0) Go away received\nI0810 00:41:05.621861 2312 log.go:181] (0xc000da14a0) (0xc0008a9ae0) Stream removed, broadcasting: 1\nI0810 00:41:05.621890 2312 log.go:181] (0xc000da14a0) (0xc0004a6280) Stream removed, broadcasting: 3\nI0810 00:41:05.621902 2312 log.go:181] (0xc000da14a0) (0xc00021d0e0) Stream removed, broadcasting: 5\n" Aug 10 00:41:05.627: INFO: stdout: "" Aug 10 00:41:05.627: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-4383 execpod6wmpj -- /bin/sh -x -c nc -zv -t -w 2 10.96.58.181 80' Aug 10 00:41:05.841: INFO: stderr: "I0810 00:41:05.755847 2330 log.go:181] (0xc000f93080) (0xc000bfb7c0) Create stream\nI0810 00:41:05.755914 2330 log.go:181] (0xc000f93080) (0xc000bfb7c0) Stream added, broadcasting: 1\nI0810 00:41:05.761603 2330 log.go:181] (0xc000f93080) Reply frame received for 1\nI0810 00:41:05.761653 2330 log.go:181] (0xc000f93080) (0xc000a2c6e0) Create stream\nI0810 00:41:05.761672 2330 log.go:181] (0xc000f93080) (0xc000a2c6e0) Stream added, broadcasting: 3\nI0810 00:41:05.762823 2330 log.go:181] (0xc000f93080) Reply frame received for 3\nI0810 00:41:05.762866 2330 log.go:181] (0xc000f93080) (0xc0007aa280) Create stream\nI0810 00:41:05.762888 2330 log.go:181] (0xc000f93080) (0xc0007aa280) Stream added, broadcasting: 5\nI0810 00:41:05.764017 2330 log.go:181] (0xc000f93080) Reply frame received for 5\nI0810 00:41:05.833097 2330 log.go:181] (0xc000f93080) Data frame received for 3\nI0810 00:41:05.833148 2330 log.go:181] (0xc000a2c6e0) (3) Data frame handling\nI0810 00:41:05.833170 2330 log.go:181] (0xc000f93080) Data frame received for 5\nI0810 00:41:05.833180 2330 log.go:181] (0xc0007aa280) (5) Data frame handling\nI0810 00:41:05.833193 2330 log.go:181] (0xc0007aa280) (5) Data frame sent\nI0810 00:41:05.833202 2330 log.go:181] (0xc000f93080) Data frame received for 5\nI0810 00:41:05.833208 2330 log.go:181] 
(0xc0007aa280) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.58.181 80\nConnection to 10.96.58.181 80 port [tcp/http] succeeded!\nI0810 00:41:05.834614 2330 log.go:181] (0xc000f93080) Data frame received for 1\nI0810 00:41:05.834641 2330 log.go:181] (0xc000bfb7c0) (1) Data frame handling\nI0810 00:41:05.834658 2330 log.go:181] (0xc000bfb7c0) (1) Data frame sent\nI0810 00:41:05.834787 2330 log.go:181] (0xc000f93080) (0xc000bfb7c0) Stream removed, broadcasting: 1\nI0810 00:41:05.834844 2330 log.go:181] (0xc000f93080) Go away received\nI0810 00:41:05.835260 2330 log.go:181] (0xc000f93080) (0xc000bfb7c0) Stream removed, broadcasting: 1\nI0810 00:41:05.835284 2330 log.go:181] (0xc000f93080) (0xc000a2c6e0) Stream removed, broadcasting: 3\nI0810 00:41:05.835293 2330 log.go:181] (0xc000f93080) (0xc0007aa280) Stream removed, broadcasting: 5\n" Aug 10 00:41:05.841: INFO: stdout: "" Aug 10 00:41:05.841: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:41:05.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4383" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:11.839 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":303,"completed":228,"skipped":3731,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:41:05.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with configMap that has name projected-configmap-test-upd-888165f0-a880-4022-bce4-556b109032d7 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-888165f0-a880-4022-bce4-556b109032d7 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:41:12.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"projected-7101" for this suite. • [SLOW TEST:6.244 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":229,"skipped":3753,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:41:12.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-2965 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace 
statefulset-2965 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2965 Aug 10 00:41:12.276: INFO: Found 0 stateful pods, waiting for 1 Aug 10 00:41:22.281: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Aug 10 00:41:22.284: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2965 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 10 00:41:22.555: INFO: stderr: "I0810 00:41:22.428414 2348 log.go:181] (0xc000791550) (0xc000e98b40) Create stream\nI0810 00:41:22.428467 2348 log.go:181] (0xc000791550) (0xc000e98b40) Stream added, broadcasting: 1\nI0810 00:41:22.434059 2348 log.go:181] (0xc000791550) Reply frame received for 1\nI0810 00:41:22.434115 2348 log.go:181] (0xc000791550) (0xc00028a500) Create stream\nI0810 00:41:22.434134 2348 log.go:181] (0xc000791550) (0xc00028a500) Stream added, broadcasting: 3\nI0810 00:41:22.436498 2348 log.go:181] (0xc000791550) Reply frame received for 3\nI0810 00:41:22.436544 2348 log.go:181] (0xc000791550) (0xc00014b040) Create stream\nI0810 00:41:22.436560 2348 log.go:181] (0xc000791550) (0xc00014b040) Stream added, broadcasting: 5\nI0810 00:41:22.438215 2348 log.go:181] (0xc000791550) Reply frame received for 5\nI0810 00:41:22.511541 2348 log.go:181] (0xc000791550) Data frame received for 5\nI0810 00:41:22.511573 2348 log.go:181] (0xc00014b040) (5) Data frame handling\nI0810 00:41:22.511595 2348 log.go:181] (0xc00014b040) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0810 00:41:22.546035 2348 log.go:181] (0xc000791550) Data frame received for 3\nI0810 00:41:22.546068 2348 log.go:181] (0xc00028a500) (3) Data frame handling\nI0810 00:41:22.546095 2348 log.go:181] (0xc00028a500) (3) Data frame sent\nI0810 00:41:22.546270 2348 
log.go:181] (0xc000791550) Data frame received for 3\nI0810 00:41:22.546302 2348 log.go:181] (0xc00028a500) (3) Data frame handling\nI0810 00:41:22.546598 2348 log.go:181] (0xc000791550) Data frame received for 5\nI0810 00:41:22.546681 2348 log.go:181] (0xc00014b040) (5) Data frame handling\nI0810 00:41:22.549265 2348 log.go:181] (0xc000791550) Data frame received for 1\nI0810 00:41:22.549290 2348 log.go:181] (0xc000e98b40) (1) Data frame handling\nI0810 00:41:22.549301 2348 log.go:181] (0xc000e98b40) (1) Data frame sent\nI0810 00:41:22.549324 2348 log.go:181] (0xc000791550) (0xc000e98b40) Stream removed, broadcasting: 1\nI0810 00:41:22.549344 2348 log.go:181] (0xc000791550) Go away received\nI0810 00:41:22.549944 2348 log.go:181] (0xc000791550) (0xc000e98b40) Stream removed, broadcasting: 1\nI0810 00:41:22.549988 2348 log.go:181] (0xc000791550) (0xc00028a500) Stream removed, broadcasting: 3\nI0810 00:41:22.550008 2348 log.go:181] (0xc000791550) (0xc00014b040) Stream removed, broadcasting: 5\n" Aug 10 00:41:22.556: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 10 00:41:22.556: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 10 00:41:22.558: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Aug 10 00:41:32.563: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 10 00:41:32.563: INFO: Waiting for statefulset status.replicas updated to 0 Aug 10 00:41:32.599: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999342s Aug 10 00:41:33.603: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.974894214s Aug 10 00:41:34.607: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.970430307s Aug 10 00:41:35.611: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.966818311s Aug 10 00:41:36.618: INFO: 
Verifying statefulset ss doesn't scale past 1 for another 5.962092015s Aug 10 00:41:37.623: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.955630145s Aug 10 00:41:38.648: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.951017556s Aug 10 00:41:39.653: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.925987651s Aug 10 00:41:40.657: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.92105506s Aug 10 00:41:41.662: INFO: Verifying statefulset ss doesn't scale past 1 for another 916.213467ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2965 Aug 10 00:41:42.667: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2965 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 10 00:41:42.904: INFO: stderr: "I0810 00:41:42.812818 2366 log.go:181] (0xc000d21290) (0xc000c21cc0) Create stream\nI0810 00:41:42.812877 2366 log.go:181] (0xc000d21290) (0xc000c21cc0) Stream added, broadcasting: 1\nI0810 00:41:42.818103 2366 log.go:181] (0xc000d21290) Reply frame received for 1\nI0810 00:41:42.818164 2366 log.go:181] (0xc000d21290) (0xc000c100a0) Create stream\nI0810 00:41:42.818178 2366 log.go:181] (0xc000d21290) (0xc000c100a0) Stream added, broadcasting: 3\nI0810 00:41:42.819025 2366 log.go:181] (0xc000d21290) Reply frame received for 3\nI0810 00:41:42.819059 2366 log.go:181] (0xc000d21290) (0xc000bfe960) Create stream\nI0810 00:41:42.819073 2366 log.go:181] (0xc000d21290) (0xc000bfe960) Stream added, broadcasting: 5\nI0810 00:41:42.819831 2366 log.go:181] (0xc000d21290) Reply frame received for 5\nI0810 00:41:42.893799 2366 log.go:181] (0xc000d21290) Data frame received for 5\nI0810 00:41:42.893848 2366 log.go:181] (0xc000bfe960) (5) Data frame handling\nI0810 00:41:42.893873 2366 log.go:181] (0xc000bfe960) (5) Data frame sent\nI0810 
00:41:42.893891 2366 log.go:181] (0xc000d21290) Data frame received for 5\nI0810 00:41:42.893907 2366 log.go:181] (0xc000bfe960) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0810 00:41:42.893979 2366 log.go:181] (0xc000d21290) Data frame received for 3\nI0810 00:41:42.894025 2366 log.go:181] (0xc000c100a0) (3) Data frame handling\nI0810 00:41:42.894048 2366 log.go:181] (0xc000c100a0) (3) Data frame sent\nI0810 00:41:42.894257 2366 log.go:181] (0xc000d21290) Data frame received for 3\nI0810 00:41:42.894294 2366 log.go:181] (0xc000c100a0) (3) Data frame handling\nI0810 00:41:42.896505 2366 log.go:181] (0xc000d21290) Data frame received for 1\nI0810 00:41:42.896536 2366 log.go:181] (0xc000c21cc0) (1) Data frame handling\nI0810 00:41:42.896558 2366 log.go:181] (0xc000c21cc0) (1) Data frame sent\nI0810 00:41:42.896573 2366 log.go:181] (0xc000d21290) (0xc000c21cc0) Stream removed, broadcasting: 1\nI0810 00:41:42.897138 2366 log.go:181] (0xc000d21290) Go away received\nI0810 00:41:42.897234 2366 log.go:181] (0xc000d21290) (0xc000c21cc0) Stream removed, broadcasting: 1\nI0810 00:41:42.897271 2366 log.go:181] (0xc000d21290) (0xc000c100a0) Stream removed, broadcasting: 3\nI0810 00:41:42.897298 2366 log.go:181] (0xc000d21290) (0xc000bfe960) Stream removed, broadcasting: 5\n" Aug 10 00:41:42.904: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 10 00:41:42.904: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 10 00:41:42.907: INFO: Found 1 stateful pods, waiting for 3 Aug 10 00:41:52.913: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Aug 10 00:41:52.913: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Aug 10 00:41:52.913: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that 
stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Aug 10 00:41:52.920: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2965 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 10 00:41:53.184: INFO: stderr: "I0810 00:41:53.096952 2384 log.go:181] (0xc000e238c0) (0xc00095fe00) Create stream\nI0810 00:41:53.097013 2384 log.go:181] (0xc000e238c0) (0xc00095fe00) Stream added, broadcasting: 1\nI0810 00:41:53.099699 2384 log.go:181] (0xc000e238c0) Reply frame received for 1\nI0810 00:41:53.099806 2384 log.go:181] (0xc000e238c0) (0xc00059bcc0) Create stream\nI0810 00:41:53.099819 2384 log.go:181] (0xc000e238c0) (0xc00059bcc0) Stream added, broadcasting: 3\nI0810 00:41:53.100906 2384 log.go:181] (0xc000e238c0) Reply frame received for 3\nI0810 00:41:53.100939 2384 log.go:181] (0xc000e238c0) (0xc000969180) Create stream\nI0810 00:41:53.100949 2384 log.go:181] (0xc000e238c0) (0xc000969180) Stream added, broadcasting: 5\nI0810 00:41:53.101728 2384 log.go:181] (0xc000e238c0) Reply frame received for 5\nI0810 00:41:53.170770 2384 log.go:181] (0xc000e238c0) Data frame received for 5\nI0810 00:41:53.170837 2384 log.go:181] (0xc000969180) (5) Data frame handling\nI0810 00:41:53.170867 2384 log.go:181] (0xc000969180) (5) Data frame sent\nI0810 00:41:53.170890 2384 log.go:181] (0xc000e238c0) Data frame received for 5\nI0810 00:41:53.170923 2384 log.go:181] (0xc000969180) (5) Data frame handling\nI0810 00:41:53.170948 2384 log.go:181] (0xc000e238c0) Data frame received for 3\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0810 00:41:53.170989 2384 log.go:181] (0xc00059bcc0) (3) Data frame handling\nI0810 00:41:53.171056 2384 log.go:181] (0xc00059bcc0) (3) Data frame sent\nI0810 00:41:53.171083 2384 log.go:181] (0xc000e238c0) Data frame received for 3\nI0810 00:41:53.171099 2384 log.go:181] (0xc00059bcc0) (3) Data 
frame handling\nI0810 00:41:53.177293 2384 log.go:181] (0xc000e238c0) Data frame received for 1\nI0810 00:41:53.177330 2384 log.go:181] (0xc00095fe00) (1) Data frame handling\nI0810 00:41:53.177350 2384 log.go:181] (0xc00095fe00) (1) Data frame sent\nI0810 00:41:53.177371 2384 log.go:181] (0xc000e238c0) (0xc00095fe00) Stream removed, broadcasting: 1\nI0810 00:41:53.177394 2384 log.go:181] (0xc000e238c0) Go away received\nI0810 00:41:53.177902 2384 log.go:181] (0xc000e238c0) (0xc00095fe00) Stream removed, broadcasting: 1\nI0810 00:41:53.177930 2384 log.go:181] (0xc000e238c0) (0xc00059bcc0) Stream removed, broadcasting: 3\nI0810 00:41:53.177945 2384 log.go:181] (0xc000e238c0) (0xc000969180) Stream removed, broadcasting: 5\n" Aug 10 00:41:53.184: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 10 00:41:53.184: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 10 00:41:53.184: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2965 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 10 00:41:53.414: INFO: stderr: "I0810 00:41:53.316529 2402 log.go:181] (0xc0006d6f20) (0xc000c01360) Create stream\nI0810 00:41:53.316610 2402 log.go:181] (0xc0006d6f20) (0xc000c01360) Stream added, broadcasting: 1\nI0810 00:41:53.321022 2402 log.go:181] (0xc0006d6f20) Reply frame received for 1\nI0810 00:41:53.321061 2402 log.go:181] (0xc0006d6f20) (0xc000a188c0) Create stream\nI0810 00:41:53.321069 2402 log.go:181] (0xc0006d6f20) (0xc000a188c0) Stream added, broadcasting: 3\nI0810 00:41:53.321985 2402 log.go:181] (0xc0006d6f20) Reply frame received for 3\nI0810 00:41:53.322013 2402 log.go:181] (0xc0006d6f20) (0xc000a19900) Create stream\nI0810 00:41:53.322026 2402 log.go:181] (0xc0006d6f20) (0xc000a19900) Stream added, broadcasting: 5\nI0810 
00:41:53.323041 2402 log.go:181] (0xc0006d6f20) Reply frame received for 5\nI0810 00:41:53.372201 2402 log.go:181] (0xc0006d6f20) Data frame received for 5\nI0810 00:41:53.372227 2402 log.go:181] (0xc000a19900) (5) Data frame handling\nI0810 00:41:53.372250 2402 log.go:181] (0xc000a19900) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0810 00:41:53.406410 2402 log.go:181] (0xc0006d6f20) Data frame received for 3\nI0810 00:41:53.406539 2402 log.go:181] (0xc000a188c0) (3) Data frame handling\nI0810 00:41:53.406572 2402 log.go:181] (0xc000a188c0) (3) Data frame sent\nI0810 00:41:53.406739 2402 log.go:181] (0xc0006d6f20) Data frame received for 3\nI0810 00:41:53.406766 2402 log.go:181] (0xc000a188c0) (3) Data frame handling\nI0810 00:41:53.406964 2402 log.go:181] (0xc0006d6f20) Data frame received for 5\nI0810 00:41:53.406989 2402 log.go:181] (0xc000a19900) (5) Data frame handling\nI0810 00:41:53.408487 2402 log.go:181] (0xc0006d6f20) Data frame received for 1\nI0810 00:41:53.408512 2402 log.go:181] (0xc000c01360) (1) Data frame handling\nI0810 00:41:53.408529 2402 log.go:181] (0xc000c01360) (1) Data frame sent\nI0810 00:41:53.408549 2402 log.go:181] (0xc0006d6f20) (0xc000c01360) Stream removed, broadcasting: 1\nI0810 00:41:53.408581 2402 log.go:181] (0xc0006d6f20) Go away received\nI0810 00:41:53.409119 2402 log.go:181] (0xc0006d6f20) (0xc000c01360) Stream removed, broadcasting: 1\nI0810 00:41:53.409143 2402 log.go:181] (0xc0006d6f20) (0xc000a188c0) Stream removed, broadcasting: 3\nI0810 00:41:53.409154 2402 log.go:181] (0xc0006d6f20) (0xc000a19900) Stream removed, broadcasting: 5\n" Aug 10 00:41:53.414: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 10 00:41:53.414: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 10 00:41:53.414: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-2965 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 10 00:41:53.674: INFO: stderr: "I0810 00:41:53.558063 2420 log.go:181] (0xc0009b6f20) (0xc000a0ab40) Create stream\nI0810 00:41:53.558136 2420 log.go:181] (0xc0009b6f20) (0xc000a0ab40) Stream added, broadcasting: 1\nI0810 00:41:53.559654 2420 log.go:181] (0xc0009b6f20) Reply frame received for 1\nI0810 00:41:53.559681 2420 log.go:181] (0xc0009b6f20) (0xc00078ebe0) Create stream\nI0810 00:41:53.559688 2420 log.go:181] (0xc0009b6f20) (0xc00078ebe0) Stream added, broadcasting: 3\nI0810 00:41:53.560496 2420 log.go:181] (0xc0009b6f20) Reply frame received for 3\nI0810 00:41:53.560525 2420 log.go:181] (0xc0009b6f20) (0xc0000b0640) Create stream\nI0810 00:41:53.560542 2420 log.go:181] (0xc0009b6f20) (0xc0000b0640) Stream added, broadcasting: 5\nI0810 00:41:53.561457 2420 log.go:181] (0xc0009b6f20) Reply frame received for 5\nI0810 00:41:53.625890 2420 log.go:181] (0xc0009b6f20) Data frame received for 5\nI0810 00:41:53.625919 2420 log.go:181] (0xc0000b0640) (5) Data frame handling\nI0810 00:41:53.625933 2420 log.go:181] (0xc0000b0640) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0810 00:41:53.663674 2420 log.go:181] (0xc0009b6f20) Data frame received for 3\nI0810 00:41:53.663697 2420 log.go:181] (0xc00078ebe0) (3) Data frame handling\nI0810 00:41:53.663717 2420 log.go:181] (0xc00078ebe0) (3) Data frame sent\nI0810 00:41:53.665353 2420 log.go:181] (0xc0009b6f20) Data frame received for 3\nI0810 00:41:53.665392 2420 log.go:181] (0xc00078ebe0) (3) Data frame handling\nI0810 00:41:53.665415 2420 log.go:181] (0xc0009b6f20) Data frame received for 5\nI0810 00:41:53.665437 2420 log.go:181] (0xc0000b0640) (5) Data frame handling\nI0810 00:41:53.667612 2420 log.go:181] (0xc0009b6f20) Data frame received for 1\nI0810 00:41:53.667656 2420 log.go:181] (0xc000a0ab40) (1) Data frame handling\nI0810 
00:41:53.667692 2420 log.go:181] (0xc000a0ab40) (1) Data frame sent\nI0810 00:41:53.667722 2420 log.go:181] (0xc0009b6f20) (0xc000a0ab40) Stream removed, broadcasting: 1\nI0810 00:41:53.668161 2420 log.go:181] (0xc0009b6f20) (0xc000a0ab40) Stream removed, broadcasting: 1\nI0810 00:41:53.668192 2420 log.go:181] (0xc0009b6f20) (0xc00078ebe0) Stream removed, broadcasting: 3\nI0810 00:41:53.668205 2420 log.go:181] (0xc0009b6f20) (0xc0000b0640) Stream removed, broadcasting: 5\n" Aug 10 00:41:53.674: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 10 00:41:53.674: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 10 00:41:53.674: INFO: Waiting for statefulset status.replicas updated to 0 Aug 10 00:41:53.677: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Aug 10 00:42:03.684: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 10 00:42:03.684: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Aug 10 00:42:03.684: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Aug 10 00:42:03.716: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999626s Aug 10 00:42:04.722: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.976574929s Aug 10 00:42:05.733: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.97140632s Aug 10 00:42:06.738: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.959764917s Aug 10 00:42:07.743: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.95466448s Aug 10 00:42:08.748: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.950138365s Aug 10 00:42:09.751: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.945507411s Aug 10 00:42:10.756: INFO: Verifying statefulset ss doesn't 
scale past 3 for another 2.941841036s Aug 10 00:42:11.785: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.936543017s Aug 10 00:42:12.790: INFO: Verifying statefulset ss doesn't scale past 3 for another 907.785519ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-2965 Aug 10 00:42:13.795: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2965 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 10 00:42:14.007: INFO: stderr: "I0810 00:42:13.933744 2438 log.go:181] (0xc000c0cc60) (0xc0008f6500) Create stream\nI0810 00:42:13.933828 2438 log.go:181] (0xc000c0cc60) (0xc0008f6500) Stream added, broadcasting: 1\nI0810 00:42:13.938768 2438 log.go:181] (0xc000c0cc60) Reply frame received for 1\nI0810 00:42:13.938820 2438 log.go:181] (0xc000c0cc60) (0xc000b27d60) Create stream\nI0810 00:42:13.938837 2438 log.go:181] (0xc000c0cc60) (0xc000b27d60) Stream added, broadcasting: 3\nI0810 00:42:13.939897 2438 log.go:181] (0xc000c0cc60) Reply frame received for 3\nI0810 00:42:13.939937 2438 log.go:181] (0xc000c0cc60) (0xc000b10820) Create stream\nI0810 00:42:13.939950 2438 log.go:181] (0xc000c0cc60) (0xc000b10820) Stream added, broadcasting: 5\nI0810 00:42:13.940986 2438 log.go:181] (0xc000c0cc60) Reply frame received for 5\nI0810 00:42:14.000060 2438 log.go:181] (0xc000c0cc60) Data frame received for 5\nI0810 00:42:14.000099 2438 log.go:181] (0xc000b10820) (5) Data frame handling\nI0810 00:42:14.000114 2438 log.go:181] (0xc000b10820) (5) Data frame sent\nI0810 00:42:14.000123 2438 log.go:181] (0xc000c0cc60) Data frame received for 5\nI0810 00:42:14.000131 2438 log.go:181] (0xc000b10820) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0810 00:42:14.000154 2438 log.go:181] (0xc000c0cc60) Data frame received for 3\nI0810 00:42:14.000162 2438 log.go:181]
(0xc000b27d60) (3) Data frame handling\nI0810 00:42:14.000176 2438 log.go:181] (0xc000b27d60) (3) Data frame sent\nI0810 00:42:14.000185 2438 log.go:181] (0xc000c0cc60) Data frame received for 3\nI0810 00:42:14.000194 2438 log.go:181] (0xc000b27d60) (3) Data frame handling\nI0810 00:42:14.001521 2438 log.go:181] (0xc000c0cc60) Data frame received for 1\nI0810 00:42:14.001550 2438 log.go:181] (0xc0008f6500) (1) Data frame handling\nI0810 00:42:14.001567 2438 log.go:181] (0xc0008f6500) (1) Data frame sent\nI0810 00:42:14.001580 2438 log.go:181] (0xc000c0cc60) (0xc0008f6500) Stream removed, broadcasting: 1\nI0810 00:42:14.001595 2438 log.go:181] (0xc000c0cc60) Go away received\nI0810 00:42:14.001861 2438 log.go:181] (0xc000c0cc60) (0xc0008f6500) Stream removed, broadcasting: 1\nI0810 00:42:14.001875 2438 log.go:181] (0xc000c0cc60) (0xc000b27d60) Stream removed, broadcasting: 3\nI0810 00:42:14.001881 2438 log.go:181] (0xc000c0cc60) (0xc000b10820) Stream removed, broadcasting: 5\n" Aug 10 00:42:14.007: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 10 00:42:14.007: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 10 00:42:14.007: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2965 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 10 00:42:14.235: INFO: stderr: "I0810 00:42:14.146155 2457 log.go:181] (0xc000ca0d10) (0xc000b730e0) Create stream\nI0810 00:42:14.146205 2457 log.go:181] (0xc000ca0d10) (0xc000b730e0) Stream added, broadcasting: 1\nI0810 00:42:14.148054 2457 log.go:181] (0xc000ca0d10) Reply frame received for 1\nI0810 00:42:14.148103 2457 log.go:181] (0xc000ca0d10) (0xc00096c780) Create stream\nI0810 00:42:14.148128 2457 log.go:181] (0xc000ca0d10) (0xc00096c780) Stream added, broadcasting: 3\nI0810 
00:42:14.149221 2457 log.go:181] (0xc000ca0d10) Reply frame received for 3\nI0810 00:42:14.149253 2457 log.go:181] (0xc000ca0d10) (0xc000c98140) Create stream\nI0810 00:42:14.149275 2457 log.go:181] (0xc000ca0d10) (0xc000c98140) Stream added, broadcasting: 5\nI0810 00:42:14.150146 2457 log.go:181] (0xc000ca0d10) Reply frame received for 5\nI0810 00:42:14.226119 2457 log.go:181] (0xc000ca0d10) Data frame received for 3\nI0810 00:42:14.226163 2457 log.go:181] (0xc00096c780) (3) Data frame handling\nI0810 00:42:14.226175 2457 log.go:181] (0xc00096c780) (3) Data frame sent\nI0810 00:42:14.226254 2457 log.go:181] (0xc000ca0d10) Data frame received for 5\nI0810 00:42:14.226323 2457 log.go:181] (0xc000c98140) (5) Data frame handling\nI0810 00:42:14.226347 2457 log.go:181] (0xc000c98140) (5) Data frame sent\nI0810 00:42:14.226369 2457 log.go:181] (0xc000ca0d10) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0810 00:42:14.226388 2457 log.go:181] (0xc000c98140) (5) Data frame handling\nI0810 00:42:14.226439 2457 log.go:181] (0xc000ca0d10) Data frame received for 3\nI0810 00:42:14.226464 2457 log.go:181] (0xc00096c780) (3) Data frame handling\nI0810 00:42:14.227814 2457 log.go:181] (0xc000ca0d10) Data frame received for 1\nI0810 00:42:14.227832 2457 log.go:181] (0xc000b730e0) (1) Data frame handling\nI0810 00:42:14.227843 2457 log.go:181] (0xc000b730e0) (1) Data frame sent\nI0810 00:42:14.227857 2457 log.go:181] (0xc000ca0d10) (0xc000b730e0) Stream removed, broadcasting: 1\nI0810 00:42:14.227890 2457 log.go:181] (0xc000ca0d10) Go away received\nI0810 00:42:14.228320 2457 log.go:181] (0xc000ca0d10) (0xc000b730e0) Stream removed, broadcasting: 1\nI0810 00:42:14.228339 2457 log.go:181] (0xc000ca0d10) (0xc00096c780) Stream removed, broadcasting: 3\nI0810 00:42:14.228350 2457 log.go:181] (0xc000ca0d10) (0xc000c98140) Stream removed, broadcasting: 5\n" Aug 10 00:42:14.235: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" 
Aug 10 00:42:14.235: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 10 00:42:14.235: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2965 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 10 00:42:14.470: INFO: stderr: "I0810 00:42:14.383429 2475 log.go:181] (0xc000140420) (0xc000b39180) Create stream\nI0810 00:42:14.383501 2475 log.go:181] (0xc000140420) (0xc000b39180) Stream added, broadcasting: 1\nI0810 00:42:14.385532 2475 log.go:181] (0xc000140420) Reply frame received for 1\nI0810 00:42:14.385568 2475 log.go:181] (0xc000140420) (0xc000b4ac80) Create stream\nI0810 00:42:14.385580 2475 log.go:181] (0xc000140420) (0xc000b4ac80) Stream added, broadcasting: 3\nI0810 00:42:14.386914 2475 log.go:181] (0xc000140420) Reply frame received for 3\nI0810 00:42:14.386949 2475 log.go:181] (0xc000140420) (0xc000b2e960) Create stream\nI0810 00:42:14.386960 2475 log.go:181] (0xc000140420) (0xc000b2e960) Stream added, broadcasting: 5\nI0810 00:42:14.388069 2475 log.go:181] (0xc000140420) Reply frame received for 5\nI0810 00:42:14.462659 2475 log.go:181] (0xc000140420) Data frame received for 5\nI0810 00:42:14.462704 2475 log.go:181] (0xc000b2e960) (5) Data frame handling\nI0810 00:42:14.462726 2475 log.go:181] (0xc000b2e960) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0810 00:42:14.462760 2475 log.go:181] (0xc000140420) Data frame received for 5\nI0810 00:42:14.462773 2475 log.go:181] (0xc000b2e960) (5) Data frame handling\nI0810 00:42:14.462799 2475 log.go:181] (0xc000140420) Data frame received for 3\nI0810 00:42:14.462828 2475 log.go:181] (0xc000b4ac80) (3) Data frame handling\nI0810 00:42:14.462859 2475 log.go:181] (0xc000b4ac80) (3) Data frame sent\nI0810 00:42:14.462874 2475 log.go:181] (0xc000140420) Data frame received for 
3\nI0810 00:42:14.462884 2475 log.go:181] (0xc000b4ac80) (3) Data frame handling\nI0810 00:42:14.464133 2475 log.go:181] (0xc000140420) Data frame received for 1\nI0810 00:42:14.464150 2475 log.go:181] (0xc000b39180) (1) Data frame handling\nI0810 00:42:14.464162 2475 log.go:181] (0xc000b39180) (1) Data frame sent\nI0810 00:42:14.464176 2475 log.go:181] (0xc000140420) (0xc000b39180) Stream removed, broadcasting: 1\nI0810 00:42:14.464201 2475 log.go:181] (0xc000140420) Go away received\nI0810 00:42:14.464568 2475 log.go:181] (0xc000140420) (0xc000b39180) Stream removed, broadcasting: 1\nI0810 00:42:14.464593 2475 log.go:181] (0xc000140420) (0xc000b4ac80) Stream removed, broadcasting: 3\nI0810 00:42:14.464603 2475 log.go:181] (0xc000140420) (0xc000b2e960) Stream removed, broadcasting: 5\n" Aug 10 00:42:14.470: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 10 00:42:14.470: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 10 00:42:14.470: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Aug 10 00:42:34.483: INFO: Deleting all statefulset in ns statefulset-2965 Aug 10 00:42:34.486: INFO: Scaling statefulset ss to 0 Aug 10 00:42:34.496: INFO: Waiting for statefulset status.replicas updated to 0 Aug 10 00:42:34.499: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:42:34.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2965" for this suite. 
• [SLOW TEST:82.363 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":303,"completed":230,"skipped":3823,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:42:34.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 10 00:42:34.672: INFO: Waiting up to 5m0s for pod "downwardapi-volume-85c3e9da-28ec-4126-bced-8c2c209fd1be" in namespace "projected-5871" to be "Succeeded or 
Failed" Aug 10 00:42:34.702: INFO: Pod "downwardapi-volume-85c3e9da-28ec-4126-bced-8c2c209fd1be": Phase="Pending", Reason="", readiness=false. Elapsed: 29.234896ms Aug 10 00:42:36.705: INFO: Pod "downwardapi-volume-85c3e9da-28ec-4126-bced-8c2c209fd1be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032937913s Aug 10 00:42:38.710: INFO: Pod "downwardapi-volume-85c3e9da-28ec-4126-bced-8c2c209fd1be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037274638s STEP: Saw pod success Aug 10 00:42:38.710: INFO: Pod "downwardapi-volume-85c3e9da-28ec-4126-bced-8c2c209fd1be" satisfied condition "Succeeded or Failed" Aug 10 00:42:38.713: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-85c3e9da-28ec-4126-bced-8c2c209fd1be container client-container: STEP: delete the pod Aug 10 00:42:38.865: INFO: Waiting for pod downwardapi-volume-85c3e9da-28ec-4126-bced-8c2c209fd1be to disappear Aug 10 00:42:38.874: INFO: Pod downwardapi-volume-85c3e9da-28ec-4126-bced-8c2c209fd1be no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:42:38.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5871" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":231,"skipped":3856,"failed":0} SSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:42:38.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Aug 10 00:42:44.081: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:42:44.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9765" for this suite. 
• [SLOW TEST:5.335 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":303,"completed":232,"skipped":3864,"failed":0} SSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:42:44.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-766, will wait for the garbage collector to delete the pods Aug 10 00:42:50.428: INFO: Deleting Job.batch foo took: 5.930611ms Aug 10 00:42:50.629: INFO: Terminating Job.batch foo pods took: 200.290802ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:43:24.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-766" for this suite. 
• [SLOW TEST:40.237 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":303,"completed":233,"skipped":3869,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:43:24.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-6322 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-6322 STEP: creating replication controller externalsvc in namespace services-6322 I0810 00:43:24.814535 8 runners.go:190] Created replication controller with name: externalsvc, namespace: services-6322, replica count: 2 I0810 00:43:27.865095 8 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0810 
00:43:30.865361 8 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Aug 10 00:43:30.945: INFO: Creating new exec pod Aug 10 00:43:34.961: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-6322 execpodbk9jn -- /bin/sh -x -c nslookup nodeport-service.services-6322.svc.cluster.local' Aug 10 00:43:35.194: INFO: stderr: "I0810 00:43:35.102680 2493 log.go:181] (0xc000f00e70) (0xc000a7e960) Create stream\nI0810 00:43:35.102757 2493 log.go:181] (0xc000f00e70) (0xc000a7e960) Stream added, broadcasting: 1\nI0810 00:43:35.104644 2493 log.go:181] (0xc000f00e70) Reply frame received for 1\nI0810 00:43:35.104682 2493 log.go:181] (0xc000f00e70) (0xc0000bf5e0) Create stream\nI0810 00:43:35.104690 2493 log.go:181] (0xc000f00e70) (0xc0000bf5e0) Stream added, broadcasting: 3\nI0810 00:43:35.105708 2493 log.go:181] (0xc000f00e70) Reply frame received for 3\nI0810 00:43:35.105778 2493 log.go:181] (0xc000f00e70) (0xc0008e4280) Create stream\nI0810 00:43:35.105816 2493 log.go:181] (0xc000f00e70) (0xc0008e4280) Stream added, broadcasting: 5\nI0810 00:43:35.106712 2493 log.go:181] (0xc000f00e70) Reply frame received for 5\nI0810 00:43:35.176805 2493 log.go:181] (0xc000f00e70) Data frame received for 5\nI0810 00:43:35.176833 2493 log.go:181] (0xc0008e4280) (5) Data frame handling\nI0810 00:43:35.176847 2493 log.go:181] (0xc0008e4280) (5) Data frame sent\n+ nslookup nodeport-service.services-6322.svc.cluster.local\nI0810 00:43:35.184649 2493 log.go:181] (0xc000f00e70) Data frame received for 3\nI0810 00:43:35.184665 2493 log.go:181] (0xc0000bf5e0) (3) Data frame handling\nI0810 00:43:35.184674 2493 log.go:181] (0xc0000bf5e0) (3) Data frame sent\nI0810 00:43:35.185927 2493 log.go:181] (0xc000f00e70) Data frame received for 3\nI0810 00:43:35.185944 2493 
log.go:181] (0xc0000bf5e0) (3) Data frame handling\nI0810 00:43:35.185958 2493 log.go:181] (0xc0000bf5e0) (3) Data frame sent\nI0810 00:43:35.186537 2493 log.go:181] (0xc000f00e70) Data frame received for 5\nI0810 00:43:35.186551 2493 log.go:181] (0xc0008e4280) (5) Data frame handling\nI0810 00:43:35.186574 2493 log.go:181] (0xc000f00e70) Data frame received for 3\nI0810 00:43:35.186586 2493 log.go:181] (0xc0000bf5e0) (3) Data frame handling\nI0810 00:43:35.188378 2493 log.go:181] (0xc000f00e70) Data frame received for 1\nI0810 00:43:35.188408 2493 log.go:181] (0xc000a7e960) (1) Data frame handling\nI0810 00:43:35.188424 2493 log.go:181] (0xc000a7e960) (1) Data frame sent\nI0810 00:43:35.188440 2493 log.go:181] (0xc000f00e70) (0xc000a7e960) Stream removed, broadcasting: 1\nI0810 00:43:35.188553 2493 log.go:181] (0xc000f00e70) Go away received\nI0810 00:43:35.188966 2493 log.go:181] (0xc000f00e70) (0xc000a7e960) Stream removed, broadcasting: 1\nI0810 00:43:35.188987 2493 log.go:181] (0xc000f00e70) (0xc0000bf5e0) Stream removed, broadcasting: 3\nI0810 00:43:35.188996 2493 log.go:181] (0xc000f00e70) (0xc0008e4280) Stream removed, broadcasting: 5\n" Aug 10 00:43:35.194: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-6322.svc.cluster.local\tcanonical name = externalsvc.services-6322.svc.cluster.local.\nName:\texternalsvc.services-6322.svc.cluster.local\nAddress: 10.107.35.127\n\n" STEP: deleting ReplicationController externalsvc in namespace services-6322, will wait for the garbage collector to delete the pods Aug 10 00:43:35.328: INFO: Deleting ReplicationController externalsvc took: 81.391189ms Aug 10 00:43:35.829: INFO: Terminating ReplicationController externalsvc pods took: 500.252998ms Aug 10 00:43:40.458: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:43:40.476: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6322" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:16.057 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":303,"completed":234,"skipped":3879,"failed":0} SSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:43:40.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 10 00:43:40.575: INFO: Creating ReplicaSet my-hostname-basic-9ae9bdab-d689-4b32-934d-a9db10587995 Aug 10 00:43:40.596: INFO: Pod name my-hostname-basic-9ae9bdab-d689-4b32-934d-a9db10587995: Found 0 pods out of 1 Aug 10 00:43:45.604: INFO: Pod name my-hostname-basic-9ae9bdab-d689-4b32-934d-a9db10587995: Found 1 pods out of 1 Aug 10 00:43:45.604: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-9ae9bdab-d689-4b32-934d-a9db10587995" is 
running Aug 10 00:43:45.607: INFO: Pod "my-hostname-basic-9ae9bdab-d689-4b32-934d-a9db10587995-8hq67" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-10 00:43:40 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-10 00:43:44 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-10 00:43:44 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-10 00:43:40 +0000 UTC Reason: Message:}]) Aug 10 00:43:45.607: INFO: Trying to dial the pod Aug 10 00:43:50.620: INFO: Controller my-hostname-basic-9ae9bdab-d689-4b32-934d-a9db10587995: Got expected result from replica 1 [my-hostname-basic-9ae9bdab-d689-4b32-934d-a9db10587995-8hq67]: "my-hostname-basic-9ae9bdab-d689-4b32-934d-a9db10587995-8hq67", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:43:50.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-2238" for this suite. 
• [SLOW TEST:10.113 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":303,"completed":235,"skipped":3883,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:43:50.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:308 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Aug 10 00:43:50.696: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8768' Aug 10 00:43:51.054: INFO: stderr: "" Aug 10 00:43:51.054: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Aug 10 00:43:51.054: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8768' Aug 10 00:43:51.195: INFO: stderr: "" Aug 10 00:43:51.195: INFO: stdout: "update-demo-nautilus-sfbnn update-demo-nautilus-wzw2d " Aug 10 00:43:51.195: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sfbnn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8768' Aug 10 00:43:51.307: INFO: stderr: "" Aug 10 00:43:51.307: INFO: stdout: "" Aug 10 00:43:51.307: INFO: update-demo-nautilus-sfbnn is created but not running Aug 10 00:43:56.307: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8768' Aug 10 00:43:56.407: INFO: stderr: "" Aug 10 00:43:56.407: INFO: stdout: "update-demo-nautilus-sfbnn update-demo-nautilus-wzw2d " Aug 10 00:43:56.407: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sfbnn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8768' Aug 10 00:43:56.503: INFO: stderr: "" Aug 10 00:43:56.503: INFO: stdout: "true" Aug 10 00:43:56.503: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sfbnn -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8768' Aug 10 00:43:56.607: INFO: stderr: "" Aug 10 00:43:56.607: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 10 00:43:56.607: INFO: validating pod update-demo-nautilus-sfbnn Aug 10 00:43:56.611: INFO: got data: { "image": "nautilus.jpg" } Aug 10 00:43:56.611: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 10 00:43:56.611: INFO: update-demo-nautilus-sfbnn is verified up and running Aug 10 00:43:56.611: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wzw2d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8768' Aug 10 00:43:56.710: INFO: stderr: "" Aug 10 00:43:56.710: INFO: stdout: "true" Aug 10 00:43:56.711: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wzw2d -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8768' Aug 10 00:43:56.811: INFO: stderr: "" Aug 10 00:43:56.811: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 10 00:43:56.811: INFO: validating pod update-demo-nautilus-wzw2d Aug 10 00:43:56.815: INFO: got data: { "image": "nautilus.jpg" } Aug 10 00:43:56.815: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Aug 10 00:43:56.815: INFO: update-demo-nautilus-wzw2d is verified up and running STEP: scaling down the replication controller Aug 10 00:43:56.818: INFO: scanned /root for discovery docs: Aug 10 00:43:56.819: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-8768' Aug 10 00:43:57.953: INFO: stderr: "" Aug 10 00:43:57.953: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 10 00:43:57.953: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8768' Aug 10 00:43:58.067: INFO: stderr: "" Aug 10 00:43:58.067: INFO: stdout: "update-demo-nautilus-sfbnn update-demo-nautilus-wzw2d " STEP: Replicas for name=update-demo: expected=1 actual=2 Aug 10 00:44:03.067: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8768' Aug 10 00:44:03.184: INFO: stderr: "" Aug 10 00:44:03.184: INFO: stdout: "update-demo-nautilus-sfbnn update-demo-nautilus-wzw2d " STEP: Replicas for name=update-demo: expected=1 actual=2 Aug 10 00:44:08.184: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8768' Aug 10 00:44:08.303: INFO: stderr: "" Aug 10 00:44:08.303: INFO: stdout: "update-demo-nautilus-sfbnn " Aug 10 00:44:08.303: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sfbnn -o template 
--template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8768' Aug 10 00:44:08.403: INFO: stderr: "" Aug 10 00:44:08.403: INFO: stdout: "true" Aug 10 00:44:08.404: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sfbnn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8768' Aug 10 00:44:08.507: INFO: stderr: "" Aug 10 00:44:08.507: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 10 00:44:08.507: INFO: validating pod update-demo-nautilus-sfbnn Aug 10 00:44:08.511: INFO: got data: { "image": "nautilus.jpg" } Aug 10 00:44:08.511: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 10 00:44:08.511: INFO: update-demo-nautilus-sfbnn is verified up and running STEP: scaling up the replication controller Aug 10 00:44:08.513: INFO: scanned /root for discovery docs: Aug 10 00:44:08.514: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-8768' Aug 10 00:44:09.649: INFO: stderr: "" Aug 10 00:44:09.649: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Aug 10 00:44:09.649: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8768'
Aug 10 00:44:09.807: INFO: stderr: ""
Aug 10 00:44:09.807: INFO: stdout: "update-demo-nautilus-8pbpr update-demo-nautilus-sfbnn "
Aug 10 00:44:09.807: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8pbpr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8768'
Aug 10 00:44:09.912: INFO: stderr: ""
Aug 10 00:44:09.912: INFO: stdout: ""
Aug 10 00:44:09.912: INFO: update-demo-nautilus-8pbpr is created but not running
Aug 10 00:44:14.913: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8768'
Aug 10 00:44:15.030: INFO: stderr: ""
Aug 10 00:44:15.030: INFO: stdout: "update-demo-nautilus-8pbpr update-demo-nautilus-sfbnn "
Aug 10 00:44:15.030: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8pbpr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8768'
Aug 10 00:44:15.130: INFO: stderr: ""
Aug 10 00:44:15.130: INFO: stdout: "true"
Aug 10 00:44:15.130: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8pbpr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8768'
Aug 10 00:44:15.240: INFO: stderr: ""
Aug 10 00:44:15.240: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 10 00:44:15.240: INFO: validating pod update-demo-nautilus-8pbpr
Aug 10 00:44:15.244: INFO: got data: {
  "image": "nautilus.jpg"
}
Aug 10 00:44:15.244: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 10 00:44:15.244: INFO: update-demo-nautilus-8pbpr is verified up and running
Aug 10 00:44:15.244: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sfbnn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8768'
Aug 10 00:44:15.341: INFO: stderr: ""
Aug 10 00:44:15.341: INFO: stdout: "true"
Aug 10 00:44:15.341: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sfbnn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8768'
Aug 10 00:44:15.438: INFO: stderr: ""
Aug 10 00:44:15.438: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 10 00:44:15.438: INFO: validating pod update-demo-nautilus-sfbnn
Aug 10 00:44:15.441: INFO: got data: {
  "image": "nautilus.jpg"
}
Aug 10 00:44:15.441: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 10 00:44:15.441: INFO: update-demo-nautilus-sfbnn is verified up and running
STEP: using delete to clean up resources
Aug 10 00:44:15.441: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8768'
Aug 10 00:44:15.547: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 10 00:44:15.547: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 10 00:44:15.547: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8768'
Aug 10 00:44:15.654: INFO: stderr: "No resources found in kubectl-8768 namespace.\n"
Aug 10 00:44:15.654: INFO: stdout: ""
Aug 10 00:44:15.654: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8768 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 10 00:44:15.761: INFO: stderr: ""
Aug 10 00:44:15.761: INFO: stdout: "update-demo-nautilus-8pbpr\nupdate-demo-nautilus-sfbnn\n"
Aug 10 00:44:16.261: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8768'
Aug 10 00:44:16.487: INFO: stderr: "No resources found in kubectl-8768 namespace.\n"
Aug 10 00:44:16.487: INFO: stdout: ""
Aug 10 00:44:16.487: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8768 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 10 00:44:16.619: INFO: stderr: ""
Aug 10 00:44:16.619: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 00:44:16.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8768" for this suite.
• [SLOW TEST:25.998 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:306
should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":303,"completed":236,"skipped":3928,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 00:44:16.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0810 00:44:26.906605 8 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Aug 10 00:45:28.927: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 00:45:28.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6503" for this suite.
• [SLOW TEST:72.307 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":303,"completed":237,"skipped":3986,"failed":0}
S
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 00:45:28.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-projected-tfzm
STEP: Creating a pod to test atomic-volume-subpath
Aug 10 00:45:29.084: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-tfzm" in namespace "subpath-6666" to be "Succeeded or Failed"
Aug 10 00:45:29.110: INFO: Pod "pod-subpath-test-projected-tfzm": Phase="Pending", Reason="", readiness=false. Elapsed: 25.897576ms
Aug 10 00:45:31.114: INFO: Pod "pod-subpath-test-projected-tfzm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030109309s
Aug 10 00:45:33.143: INFO: Pod "pod-subpath-test-projected-tfzm": Phase="Running", Reason="", readiness=true. Elapsed: 4.059460609s
Aug 10 00:45:35.148: INFO: Pod "pod-subpath-test-projected-tfzm": Phase="Running", Reason="", readiness=true. Elapsed: 6.063963785s
Aug 10 00:45:37.153: INFO: Pod "pod-subpath-test-projected-tfzm": Phase="Running", Reason="", readiness=true. Elapsed: 8.068587005s
Aug 10 00:45:39.157: INFO: Pod "pod-subpath-test-projected-tfzm": Phase="Running", Reason="", readiness=true. Elapsed: 10.073036096s
Aug 10 00:45:41.162: INFO: Pod "pod-subpath-test-projected-tfzm": Phase="Running", Reason="", readiness=true. Elapsed: 12.077755315s
Aug 10 00:45:43.166: INFO: Pod "pod-subpath-test-projected-tfzm": Phase="Running", Reason="", readiness=true. Elapsed: 14.081841175s
Aug 10 00:45:45.170: INFO: Pod "pod-subpath-test-projected-tfzm": Phase="Running", Reason="", readiness=true. Elapsed: 16.086477895s
Aug 10 00:45:47.175: INFO: Pod "pod-subpath-test-projected-tfzm": Phase="Running", Reason="", readiness=true. Elapsed: 18.091262612s
Aug 10 00:45:49.180: INFO: Pod "pod-subpath-test-projected-tfzm": Phase="Running", Reason="", readiness=true. Elapsed: 20.095737735s
Aug 10 00:45:51.184: INFO: Pod "pod-subpath-test-projected-tfzm": Phase="Running", Reason="", readiness=true. Elapsed: 22.099641125s
Aug 10 00:45:53.188: INFO: Pod "pod-subpath-test-projected-tfzm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.104192289s
STEP: Saw pod success
Aug 10 00:45:53.188: INFO: Pod "pod-subpath-test-projected-tfzm" satisfied condition "Succeeded or Failed"
Aug 10 00:45:53.191: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-projected-tfzm container test-container-subpath-projected-tfzm:
STEP: delete the pod
Aug 10 00:45:53.227: INFO: Waiting for pod pod-subpath-test-projected-tfzm to disappear
Aug 10 00:45:53.251: INFO: Pod pod-subpath-test-projected-tfzm no longer exists
STEP: Deleting pod pod-subpath-test-projected-tfzm
Aug 10 00:45:53.251: INFO: Deleting pod "pod-subpath-test-projected-tfzm" in namespace "subpath-6666"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 00:45:53.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6666" for this suite.
• [SLOW TEST:24.356 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":303,"completed":238,"skipped":3987,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 00:45:53.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
Aug 10 00:45:53.346: INFO: PodSpec: initContainers in spec.initContainers
Aug 10 00:46:45.807: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-5907814a-9675-46b8-bcb9-ab37a71bf07b", GenerateName:"", Namespace:"init-container-6906", SelfLink:"/api/v1/namespaces/init-container-6906/pods/pod-init-5907814a-9675-46b8-bcb9-ab37a71bf07b", UID:"f783b5f5-b087-465b-989b-400b853af6b3", ResourceVersion:"5794378", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63732617153, loc:(*time.Location)(0x7e34b60)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"346408568"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00232a920), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00232a980)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00232a9e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00232aa40)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-8sdv4",
VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc004462100), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8sdv4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), 
StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8sdv4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8sdv4", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0011fa2e8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002aae150), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0011fa5a0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0011fa600)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0011fa608), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0011fa60c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc00218c060), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617153, loc:(*time.Location)(0x7e34b60)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617153, loc:(*time.Location)(0x7e34b60)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617153, loc:(*time.Location)(0x7e34b60)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617153, loc:(*time.Location)(0x7e34b60)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.12", PodIP:"10.244.2.88", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.88"}}, StartTime:(*v1.Time)(0xc00232aaa0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002aae230)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002aae2a0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://6f99b457d02b7f9a538baebd96d001fbff6208681529dfdf21fa29aef5268d05", Started:(*bool)(nil)}, 
v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00232ac80), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00232ab60), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc0011fa68f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 00:46:45.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6906" for this suite.
• [SLOW TEST:52.611 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":303,"completed":239,"skipped":3997,"failed":0}
[sig-api-machinery] server version should find the server version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] server version
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 00:46:45.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename server-version
STEP: Waiting for a default service account to be provisioned in namespace
[It] should find the server version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Request ServerVersion
STEP: Confirm major version
Aug 10 00:46:45.971: INFO: Major version: 1
STEP: Confirm minor version
Aug 10 00:46:45.971: INFO: cleanMinorVersion: 19
Aug 10 00:46:45.971: INFO: Minor version: 19+
[AfterEach] [sig-api-machinery] server version
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 00:46:45.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "server-version-9194" for this suite.
•{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":303,"completed":240,"skipped":3997,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 00:46:45.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Aug 10 00:46:46.057: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-099014ff-c9d0-4def-ae8d-c4b8c6c23b87" in namespace "security-context-test-3630" to be "Succeeded or Failed"
Aug 10 00:46:46.061: INFO: Pod "alpine-nnp-false-099014ff-c9d0-4def-ae8d-c4b8c6c23b87": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063099ms
Aug 10 00:46:48.066: INFO: Pod "alpine-nnp-false-099014ff-c9d0-4def-ae8d-c4b8c6c23b87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008705901s
Aug 10 00:46:50.071: INFO: Pod "alpine-nnp-false-099014ff-c9d0-4def-ae8d-c4b8c6c23b87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013150041s
Aug 10 00:46:50.071: INFO: Pod "alpine-nnp-false-099014ff-c9d0-4def-ae8d-c4b8c6c23b87" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 00:46:50.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3630" for this suite.
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":241,"skipped":4006,"failed":0}
SSSS
------------------------------
[sig-network] Services should find a service from listing all namespaces [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 00:46:50.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should find a service from listing all namespaces [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: fetching services
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 00:46:50.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4736" for this suite.
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":303,"completed":242,"skipped":4010,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 00:46:50.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 10 00:46:51.012: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 10 00:46:53.030: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617211, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617211, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617211, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617210, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 10 00:46:56.060: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 00:46:56.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3052" for this suite.
STEP: Destroying namespace "webhook-3052-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.492 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":303,"completed":243,"skipped":4020,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:46:56.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:46:56.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3222" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":303,"completed":244,"skipped":4024,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:46:56.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 10 00:46:56.861: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2781' Aug 10 00:46:57.182: INFO: stderr: "" Aug 10 00:46:57.182: INFO: stdout: 
"replicationcontroller/agnhost-primary created\n" Aug 10 00:46:57.183: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2781' Aug 10 00:46:57.716: INFO: stderr: "" Aug 10 00:46:57.716: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Aug 10 00:46:58.721: INFO: Selector matched 1 pods for map[app:agnhost] Aug 10 00:46:58.721: INFO: Found 0 / 1 Aug 10 00:46:59.772: INFO: Selector matched 1 pods for map[app:agnhost] Aug 10 00:46:59.772: INFO: Found 0 / 1 Aug 10 00:47:00.721: INFO: Selector matched 1 pods for map[app:agnhost] Aug 10 00:47:00.721: INFO: Found 0 / 1 Aug 10 00:47:01.736: INFO: Selector matched 1 pods for map[app:agnhost] Aug 10 00:47:01.736: INFO: Found 1 / 1 Aug 10 00:47:01.736: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Aug 10 00:47:01.747: INFO: Selector matched 1 pods for map[app:agnhost] Aug 10 00:47:01.747: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Aug 10 00:47:01.747: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config describe pod agnhost-primary-fw6p6 --namespace=kubectl-2781' Aug 10 00:47:01.858: INFO: stderr: "" Aug 10 00:47:01.858: INFO: stdout: "Name: agnhost-primary-fw6p6\nNamespace: kubectl-2781\nPriority: 0\nNode: latest-worker2/172.18.0.12\nStart Time: Mon, 10 Aug 2020 00:46:57 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: \nStatus: Running\nIP: 10.244.2.91\nIPs:\n IP: 10.244.2.91\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://0c6ff82284cc274c753ed1d968b1c03e86926b43c9122a635bdfbd4a5b8703ae\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 10 Aug 2020 00:47:00 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-8ngt6 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-8ngt6:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-8ngt6\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s Successfully assigned kubectl-2781/agnhost-primary-fw6p6 to latest-worker2\n Normal Pulled 3s kubelet, latest-worker2 Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.20\" already present on machine\n Normal Created 2s kubelet, latest-worker2 Created container agnhost-primary\n Normal Started 1s kubelet, latest-worker2 Started container agnhost-primary\n" Aug 10 
00:47:01.858: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config describe rc agnhost-primary --namespace=kubectl-2781' Aug 10 00:47:02.067: INFO: stderr: "" Aug 10 00:47:02.067: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-2781\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: agnhost-primary-fw6p6\n" Aug 10 00:47:02.068: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config describe service agnhost-primary --namespace=kubectl-2781' Aug 10 00:47:02.217: INFO: stderr: "" Aug 10 00:47:02.217: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-2781\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP: 10.107.239.240\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.91:6379\nSession Affinity: None\nEvents: \n" Aug 10 00:47:02.220: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config describe node latest-control-plane' Aug 10 00:47:02.360: INFO: stderr: "" Aug 10 00:47:02.360: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n 
volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 19 Jul 2020 21:38:12 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Mon, 10 Aug 2020 00:46:53 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Mon, 10 Aug 2020 00:46:53 +0000 Sun, 19 Jul 2020 21:38:08 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 10 Aug 2020 00:46:53 +0000 Sun, 19 Jul 2020 21:38:08 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 10 Aug 2020 00:46:53 +0000 Sun, 19 Jul 2020 21:38:08 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 10 Aug 2020 00:46:53 +0000 Sun, 19 Jul 2020 21:39:43 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nSystem Info:\n Machine ID: e756079c6ff042fb9f9f1838b420a0a5\n System UUID: 397b219b-882b-4fb6-87c8-e536d116b866\n Boot ID: 11738d2d-5baa-4089-8e7f-2fb0329fce58\n Kernel Version: 4.15.0-109-generic\n OS Image: Ubuntu 20.04 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.4.0-beta.1-85-g334f567e\n Kubelet Version: v1.19.0-rc.1\n Kube-Proxy Version: v1.19.0-rc.1\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (6 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system 
etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 21d\n kube-system kindnet-mg7cm 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 21d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 21d\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 21d\n kube-system kube-proxy-gb68f 0 (0%) 0 (0%) 0 (0%) 0 (0%) 21d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 21d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Aug 10 00:47:02.361: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config describe namespace kubectl-2781' Aug 10 00:47:02.478: INFO: stderr: "" Aug 10 00:47:02.478: INFO: stdout: "Name: kubectl-2781\nLabels: e2e-framework=kubectl\n e2e-run=b084a0d4-e762-408a-9ee2-94ee5fd82e54\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:47:02.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2781" for this suite. 
• [SLOW TEST:5.666 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1105 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":303,"completed":245,"skipped":4040,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:47:02.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 10 00:47:06.614: INFO: Expected: &{} to match 
Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:47:06.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4184" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":246,"skipped":4104,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:47:06.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 10 00:47:07.438: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 10 00:47:09.448: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617227, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617227, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617227, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617227, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 10 00:47:12.485: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 10 00:47:12.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8531-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:47:13.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8063" for this suite. STEP: Destroying namespace "webhook-8063-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.217 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":303,"completed":247,"skipped":4133,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:47:13.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0810 00:47:15.128624 8 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Aug 10 00:48:17.148: INFO: MetricsGrabber failed grab metrics. 
Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:48:17.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9351" for this suite. • [SLOW TEST:63.288 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":303,"completed":248,"skipped":4155,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:48:17.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Aug 10 00:48:17.279: INFO: Waiting up to 5m0s for pod "pod-07128d82-8263-448e-a14b-5ff71346553f" in namespace "emptydir-9438" to be "Succeeded or Failed" Aug 10 00:48:17.302: INFO: Pod "pod-07128d82-8263-448e-a14b-5ff71346553f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.662156ms Aug 10 00:48:19.306: INFO: Pod "pod-07128d82-8263-448e-a14b-5ff71346553f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027027788s Aug 10 00:48:21.311: INFO: Pod "pod-07128d82-8263-448e-a14b-5ff71346553f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031374632s STEP: Saw pod success Aug 10 00:48:21.311: INFO: Pod "pod-07128d82-8263-448e-a14b-5ff71346553f" satisfied condition "Succeeded or Failed" Aug 10 00:48:21.314: INFO: Trying to get logs from node latest-worker2 pod pod-07128d82-8263-448e-a14b-5ff71346553f container test-container: STEP: delete the pod Aug 10 00:48:21.397: INFO: Waiting for pod pod-07128d82-8263-448e-a14b-5ff71346553f to disappear Aug 10 00:48:21.407: INFO: Pod pod-07128d82-8263-448e-a14b-5ff71346553f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:48:21.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9438" for this suite. 
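The emptydir case above exercises a pod roughly like the following manifest. This is a sketch reconstructed from the test name "(root,0666,tmpfs)", not the exact spec the framework generates; the pod name and the mounttest arguments are assumptions, and the agnhost image tag is the one seen elsewhere in this run.

```yaml
# Hypothetical equivalent of the (root,0666,tmpfs) emptydir pod:
# an in-memory emptyDir volume mounted into a container that writes a
# file with mode 0666 and exits, so the pod reaches phase Succeeded.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-tmpfs        # illustrative name, not from the log
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20   # image observed in this run
    args:                           # assumed mounttest flags
    - mounttest
    - --new_file_0666=/test-volume/test-file
    - --file_perm=/test-volume/test-file
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                # "tmpfs" in the test name = in-memory emptyDir
```

The test waits up to 5 minutes for the pod to reach "Succeeded or Failed", then checks the container log for the created file's mode before deleting the pod, which matches the wait/log/delete sequence recorded above.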
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":249,"skipped":4158,"failed":0} S ------------------------------ [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:48:21.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-6479 STEP: creating service affinity-clusterip-transition in namespace services-6479 STEP: creating replication controller affinity-clusterip-transition in namespace services-6479 I0810 00:48:21.582552 8 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-6479, replica count: 3 I0810 00:48:24.633761 8 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0810 00:48:27.634102 8 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 10 00:48:27.640: INFO: Creating new exec pod Aug 10 00:48:32.659: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-6479 execpod-affinity26g2m -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Aug 10 00:48:32.906: INFO: stderr: "I0810 00:48:32.803837 3121 log.go:181] (0xc00061b3f0) (0xc000b03680) Create stream\nI0810 00:48:32.803913 3121 log.go:181] (0xc00061b3f0) (0xc000b03680) Stream added, broadcasting: 1\nI0810 00:48:32.810031 3121 log.go:181] (0xc00061b3f0) Reply frame received for 1\nI0810 00:48:32.810095 3121 log.go:181] (0xc00061b3f0) (0xc000aed040) Create stream\nI0810 00:48:32.810120 3121 log.go:181] (0xc00061b3f0) (0xc000aed040) Stream added, broadcasting: 3\nI0810 00:48:32.811092 3121 log.go:181] (0xc00061b3f0) Reply frame received for 3\nI0810 00:48:32.811159 3121 log.go:181] (0xc00061b3f0) (0xc0004343c0) Create stream\nI0810 00:48:32.811190 3121 log.go:181] (0xc00061b3f0) (0xc0004343c0) Stream added, broadcasting: 5\nI0810 00:48:32.812420 3121 log.go:181] (0xc00061b3f0) Reply frame received for 5\nI0810 00:48:32.899356 3121 log.go:181] (0xc00061b3f0) Data frame received for 3\nI0810 00:48:32.899415 3121 log.go:181] (0xc000aed040) (3) Data frame handling\nI0810 00:48:32.899442 3121 log.go:181] (0xc00061b3f0) Data frame received for 5\nI0810 00:48:32.899452 3121 log.go:181] (0xc0004343c0) (5) Data frame handling\nI0810 00:48:32.899465 3121 log.go:181] (0xc0004343c0) (5) Data frame sent\nI0810 00:48:32.899476 3121 log.go:181] (0xc00061b3f0) Data frame received for 5\nI0810 00:48:32.899485 3121 log.go:181] (0xc0004343c0) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI0810 00:48:32.901161 3121 log.go:181] (0xc00061b3f0) Data frame received for 1\nI0810 00:48:32.901200 3121 log.go:181] (0xc000b03680) (1) Data frame handling\nI0810 00:48:32.901219 3121 log.go:181] (0xc000b03680) (1) Data frame sent\nI0810 00:48:32.901245 3121 log.go:181] (0xc00061b3f0) 
(0xc000b03680) Stream removed, broadcasting: 1\nI0810 00:48:32.901284 3121 log.go:181] (0xc00061b3f0) Go away received\nI0810 00:48:32.901604 3121 log.go:181] (0xc00061b3f0) (0xc000b03680) Stream removed, broadcasting: 1\nI0810 00:48:32.901618 3121 log.go:181] (0xc00061b3f0) (0xc000aed040) Stream removed, broadcasting: 3\nI0810 00:48:32.901626 3121 log.go:181] (0xc00061b3f0) (0xc0004343c0) Stream removed, broadcasting: 5\n" Aug 10 00:48:32.906: INFO: stdout: "" Aug 10 00:48:32.907: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-6479 execpod-affinity26g2m -- /bin/sh -x -c nc -zv -t -w 2 10.99.192.41 80' Aug 10 00:48:33.120: INFO: stderr: "I0810 00:48:33.041925 3139 log.go:181] (0xc000c9b550) (0xc000ad9540) Create stream\nI0810 00:48:33.041977 3139 log.go:181] (0xc000c9b550) (0xc000ad9540) Stream added, broadcasting: 1\nI0810 00:48:33.047394 3139 log.go:181] (0xc000c9b550) Reply frame received for 1\nI0810 00:48:33.047437 3139 log.go:181] (0xc000c9b550) (0xc000ac52c0) Create stream\nI0810 00:48:33.047449 3139 log.go:181] (0xc000c9b550) (0xc000ac52c0) Stream added, broadcasting: 3\nI0810 00:48:33.048303 3139 log.go:181] (0xc000c9b550) Reply frame received for 3\nI0810 00:48:33.048342 3139 log.go:181] (0xc000c9b550) (0xc000abf4a0) Create stream\nI0810 00:48:33.048352 3139 log.go:181] (0xc000c9b550) (0xc000abf4a0) Stream added, broadcasting: 5\nI0810 00:48:33.049421 3139 log.go:181] (0xc000c9b550) Reply frame received for 5\nI0810 00:48:33.112320 3139 log.go:181] (0xc000c9b550) Data frame received for 3\nI0810 00:48:33.112350 3139 log.go:181] (0xc000ac52c0) (3) Data frame handling\nI0810 00:48:33.112387 3139 log.go:181] (0xc000c9b550) Data frame received for 5\nI0810 00:48:33.112401 3139 log.go:181] (0xc000abf4a0) (5) Data frame handling\nI0810 00:48:33.112414 3139 log.go:181] (0xc000abf4a0) (5) Data frame sent\nI0810 00:48:33.112429 3139 log.go:181] (0xc000c9b550) Data frame received 
for 5\nI0810 00:48:33.112437 3139 log.go:181] (0xc000abf4a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.99.192.41 80\nConnection to 10.99.192.41 80 port [tcp/http] succeeded!\nI0810 00:48:33.114043 3139 log.go:181] (0xc000c9b550) Data frame received for 1\nI0810 00:48:33.114064 3139 log.go:181] (0xc000ad9540) (1) Data frame handling\nI0810 00:48:33.114076 3139 log.go:181] (0xc000ad9540) (1) Data frame sent\nI0810 00:48:33.114100 3139 log.go:181] (0xc000c9b550) (0xc000ad9540) Stream removed, broadcasting: 1\nI0810 00:48:33.114295 3139 log.go:181] (0xc000c9b550) Go away received\nI0810 00:48:33.114458 3139 log.go:181] (0xc000c9b550) (0xc000ad9540) Stream removed, broadcasting: 1\nI0810 00:48:33.114475 3139 log.go:181] (0xc000c9b550) (0xc000ac52c0) Stream removed, broadcasting: 3\nI0810 00:48:33.114484 3139 log.go:181] (0xc000c9b550) (0xc000abf4a0) Stream removed, broadcasting: 5\n" Aug 10 00:48:33.120: INFO: stdout: "" Aug 10 00:48:33.128: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-6479 execpod-affinity26g2m -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.99.192.41:80/ ; done' Aug 10 00:48:33.447: INFO: stderr: "I0810 00:48:33.282572 3157 log.go:181] (0xc000a73080) (0xc000fac3c0) Create stream\nI0810 00:48:33.282635 3157 log.go:181] (0xc000a73080) (0xc000fac3c0) Stream added, broadcasting: 1\nI0810 00:48:33.287893 3157 log.go:181] (0xc000a73080) Reply frame received for 1\nI0810 00:48:33.287942 3157 log.go:181] (0xc000a73080) (0xc0008a1180) Create stream\nI0810 00:48:33.287963 3157 log.go:181] (0xc000a73080) (0xc0008a1180) Stream added, broadcasting: 3\nI0810 00:48:33.288900 3157 log.go:181] (0xc000a73080) Reply frame received for 3\nI0810 00:48:33.288929 3157 log.go:181] (0xc000a73080) (0xc000678140) Create stream\nI0810 00:48:33.288938 3157 log.go:181] (0xc000a73080) (0xc000678140) Stream added, broadcasting: 5\nI0810 00:48:33.289702 
3157 log.go:181] (0xc000a73080) Reply frame received for 5\nI0810 00:48:33.353328 3157 log.go:181] (0xc000a73080) Data frame received for 3\nI0810 00:48:33.353374 3157 log.go:181] (0xc0008a1180) (3) Data frame handling\nI0810 00:48:33.353390 3157 log.go:181] (0xc0008a1180) (3) Data frame sent\nI0810 00:48:33.353418 3157 log.go:181] (0xc000a73080) Data frame received for 5\nI0810 00:48:33.353428 3157 log.go:181] (0xc000678140) (5) Data frame handling\nI0810 00:48:33.353439 3157 log.go:181] (0xc000678140) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.192.41:80/\nI0810 00:48:33.359630 3157 log.go:181] (0xc000a73080) Data frame received for 3\nI0810 00:48:33.359643 3157 log.go:181] (0xc0008a1180) (3) Data frame handling\nI0810 00:48:33.359650 3157 log.go:181] (0xc0008a1180) (3) Data frame sent\nI0810 00:48:33.360239 3157 log.go:181] (0xc000a73080) Data frame received for 3\nI0810 00:48:33.360255 3157 log.go:181] (0xc0008a1180) (3) Data frame handling\nI0810 00:48:33.360265 3157 log.go:181] (0xc0008a1180) (3) Data frame sent\nI0810 00:48:33.360293 3157 log.go:181] (0xc000a73080) Data frame received for 5\nI0810 00:48:33.360310 3157 log.go:181] (0xc000678140) (5) Data frame handling\nI0810 00:48:33.360322 3157 log.go:181] (0xc000678140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.192.41:80/\nI0810 00:48:33.364429 3157 log.go:181] (0xc000a73080) Data frame received for 3\nI0810 00:48:33.364443 3157 log.go:181] (0xc0008a1180) (3) Data frame handling\nI0810 00:48:33.364455 3157 log.go:181] (0xc0008a1180) (3) Data frame sent\nI0810 00:48:33.364931 3157 log.go:181] (0xc000a73080) Data frame received for 3\nI0810 00:48:33.364944 3157 log.go:181] (0xc0008a1180) (3) Data frame handling\nI0810 00:48:33.364951 3157 log.go:181] (0xc0008a1180) (3) Data frame sent\nI0810 00:48:33.365079 3157 log.go:181] (0xc000a73080) Data frame received for 5\nI0810 00:48:33.365098 3157 log.go:181] (0xc000678140) (5) Data frame 
handling\nI0810 00:48:33.365111 3157 log.go:181] (0xc000678140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.192.41:80/\nI0810 00:48:33.370365 3157 log.go:181] (0xc000a73080) Data frame received for 3\nI0810 00:48:33.370386 3157 log.go:181] (0xc0008a1180) (3) Data frame handling\nI0810 00:48:33.370401 3157 log.go:181] (0xc0008a1180) (3) Data frame sent\nI0810 00:48:33.370840 3157 log.go:181] (0xc000a73080) Data frame received for 5\nI0810 00:48:33.370859 3157 log.go:181] (0xc000678140) (5) Data frame handling\nI0810 00:48:33.370874 3157 log.go:181] (0xc000678140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.192.41:80/\nI0810 00:48:33.370896 3157 log.go:181] (0xc000a73080) Data frame received for 3\nI0810 00:48:33.370914 3157 log.go:181] (0xc0008a1180) (3) Data frame handling\nI0810 00:48:33.370930 3157 log.go:181] (0xc0008a1180) (3) Data frame sent\nI0810 00:48:33.375431 3157 log.go:181] (0xc000a73080) Data frame received for 3\nI0810 00:48:33.375446 3157 log.go:181] (0xc0008a1180) (3) Data frame handling\nI0810 00:48:33.375462 3157 log.go:181] (0xc0008a1180) (3) Data frame sent\nI0810 00:48:33.375788 3157 log.go:181] (0xc000a73080) Data frame received for 5\nI0810 00:48:33.375808 3157 log.go:181] (0xc000678140) (5) Data frame handling\nI0810 00:48:33.375816 3157 log.go:181] (0xc000678140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.192.41:80/\nI0810 00:48:33.375827 3157 log.go:181] (0xc000a73080) Data frame received for 3\nI0810 00:48:33.375860 3157 log.go:181] (0xc0008a1180) (3) Data frame handling\nI0810 00:48:33.375876 3157 log.go:181] (0xc0008a1180) (3) Data frame sent\nI0810 00:48:33.380186 3157 log.go:181] (0xc000a73080) Data frame received for 3\nI0810 00:48:33.380203 3157 log.go:181] (0xc0008a1180) (3) Data frame handling\nI0810 00:48:33.380220 3157 log.go:181] (0xc0008a1180) (3) Data frame sent\nI0810 00:48:33.380862 3157 log.go:181] (0xc000a73080) Data frame 
received for 3\nI0810 00:48:33.380896 3157 log.go:181] (0xc0008a1180) (3) Data frame handling\nI0810 00:48:33.380917 3157 log.go:181] (0xc0008a1180) (3) Data frame sent\nI0810 00:48:33.381074 3157 log.go:181] (0xc000a73080) Data frame received for 5\nI0810 00:48:33.381093 3157 log.go:181] (0xc000678140) (5) Data frame handling\nI0810 00:48:33.381106 3157 log.go:181] (0xc000678140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.192.41:80/\nI0810 00:48:33.386189 3157 log.go:181] (0xc000a73080) Data frame received for 3\nI0810 00:48:33.386211 3157 log.go:181] (0xc0008a1180) (3) Data frame handling\nI0810 00:48:33.386227 3157 log.go:181] (0xc0008a1180) (3) Data frame sent\nI0810 00:48:33.386762 3157 log.go:181] (0xc000a73080) Data frame received for 3\nI0810 00:48:33.386796 3157 log.go:181] (0xc0008a1180) (3) Data frame handling\nI0810 00:48:33.386806 3157 log.go:181] (0xc0008a1180) (3) Data frame sent\nI0810 00:48:33.386818 3157 log.go:181] (0xc000a73080) Data frame received for 5\nI0810 00:48:33.386824 3157 log.go:181] (0xc000678140) (5) Data frame handling\nI0810 00:48:33.386835 3157 log.go:181] (0xc000678140) (5) Data frame sent\nI0810 00:48:33.386855 3157 log.go:181] (0xc000a73080) Data frame received for 5\nI0810 00:48:33.386871 3157 log.go:181] (0xc000678140) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.192.41:80/\nI0810 00:48:33.386887 3157 log.go:181] (0xc000678140) (5) Data frame sent\nI0810 00:48:33.390688 3157 log.go:181] (0xc000a73080) Data frame received for 3\nI0810 00:48:33.390707 3157 log.go:181] (0xc0008a1180) (3) Data frame handling\nI0810 00:48:33.390741 3157 log.go:181] (0xc0008a1180) (3) Data frame sent\nI0810 00:48:33.391289 3157 log.go:181] (0xc000a73080) Data frame received for 3\nI0810 00:48:33.391303 3157 log.go:181] (0xc0008a1180) (3) Data frame handling\nI0810 00:48:33.391312 3157 log.go:181] (0xc0008a1180) (3) Data frame sent\nI0810 00:48:33.391321 3157 log.go:181] 
(0xc000a73080) Data frame received for 5\nI0810 00:48:33.391357 3157 log.go:181] (0xc000678140) (5) Data frame handling\nI0810 00:48:33.391372 3157 log.go:181] (0xc000678140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.192.41:80/\nI0810 00:48:33.395148 3157 log.go:181] (0xc000a73080) Data frame received for 3\nI0810 00:48:33.395165 3157 log.go:181] (0xc0008a1180) (3) Data frame handling\nI0810 00:48:33.395172 3157 log.go:181] (0xc0008a1180) (3) Data frame sent\nI0810 00:48:33.395622 3157 log.go:181] (0xc000a73080) Data frame received for 3\nI0810 00:48:33.395649 3157 log.go:181] (0xc0008a1180) (3) Data frame handling\nI0810 00:48:33.395668 3157 log.go:181] (0xc0008a1180) (3) Data frame sent\nI0810 00:48:33.395689 3157 log.go:181] (0xc000a73080) Data frame received for 5\nI0810 00:48:33.395703 3157 log.go:181] (0xc000678140) (5) Data frame handling\nI0810 00:48:33.395724 3157 log.go:181] (0xc000678140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.192.41:80/\nI0810 00:48:33.400903 3157 log.go:181] (0xc000a73080) Data frame received for 3\nI0810 00:48:33.400923 3157 log.go:181] (0xc0008a1180) (3) Data frame handling\nI0810 00:48:33.400936 3157 log.go:181] (0xc0008a1180) (3) Data frame sent\nI0810 00:48:33.401488 3157 log.go:181] (0xc000a73080) Data frame received for 3\nI0810 00:48:33.401504 3157 log.go:181] (0xc0008a1180) (3) Data frame handling\nI0810 00:48:33.401516 3157 log.go:181] (0xc0008a1180) (3) Data frame sent\nI0810 00:48:33.401525 3157 log.go:181] (0xc000a73080) Data frame received for 5\nI0810 00:48:33.401532 3157 log.go:181] (0xc000678140) (5) Data frame handling\nI0810 00:48:33.401547 3157 log.go:181] (0xc000678140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.192.41:80/\nI0810 00:48:33.405404 3157 log.go:181] (0xc000a73080) Data frame received for 3\nI0810 00:48:33.405431 3157 log.go:181] (0xc0008a1180) (3) Data frame handling\nI0810 00:48:33.405460 3157 
log.go:181] (0xc0008a1180) (3) Data frame sent\nI0810 00:48:33.405913 3157 log.go:181] (0xc000a73080) Data frame received for 3\nI0810 00:48:33.405936 3157 log.go:181] (0xc0008a1180) (3) Data frame handling\nI0810 00:48:33.405951 3157 log.go:181] (0xc0008a1180) (3) Data frame sent\nI0810 00:48:33.405972 3157 log.go:181] (0xc000a73080) Data frame received for 5\nI0810 00:48:33.405981 3157 log.go:181] (0xc000678140) (5) Data frame handling\nI0810 00:48:33.405990 3157 log.go:181] (0xc000678140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.192.41:80/\nI0810 00:48:33.410335 3157 log.go:181] (0xc000a73080) Data frame received for 3\nI0810 00:48:33.410360 3157 log.go:181] (0xc0008a1180) (3) Data frame handling\nI0810 00:48:33.410379 3157 log.go:181] (0xc0008a1180) (3) Data frame sent\nI0810 00:48:33.411244 3157 log.go:181] (0xc000a73080) Data frame received for 3\nI0810 00:48:33.411275 3157 log.go:181] (0xc0008a1180) (3) Data frame handling\nI0810 00:48:33.411289 3157 log.go:181] (0xc0008a1180) (3) Data frame sent\nI0810 00:48:33.411420 3157 log.go:181] (0xc000a73080) Data frame received for 5\nI0810 00:48:33.411445 3157 log.go:181] (0xc000678140) (5) Data frame handling\nI0810 00:48:33.411459 3157 log.go:181] (0xc000678140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.192.41:80/\nI0810 00:48:33.416142 3157 log.go:181] (0xc000a73080) Data frame received for 3\nI0810 00:48:33.416166 3157 log.go:181] (0xc0008a1180) (3) Data frame handling\nI0810 00:48:33.416185 3157 log.go:181] (0xc0008a1180) (3) Data frame sent\nI0810 00:48:33.416601 3157 log.go:181] (0xc000a73080) Data frame received for 5\nI0810 00:48:33.416628 3157 log.go:181] (0xc000678140) (5) Data frame handling\nI0810 00:48:33.416648 3157 log.go:181] (0xc000678140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.192.41:80/\nI0810 00:48:33.416804 3157 log.go:181] (0xc000a73080) Data frame received for 3\nI0810 00:48:33.416828 3157 
log.go:181] (0xc0008a1180) (3) Data frame handling\nI0810 00:48:33.416837 3157 log.go:181] (0xc0008a1180) (3) Data frame sent\nI0810 00:48:33.421193 3157 log.go:181] (0xc000a73080) Data frame received for 3\nI0810 00:48:33.421214 3157 log.go:181] (0xc0008a1180) (3) Data frame handling\nI0810 00:48:33.421238 3157 log.go:181] (0xc0008a1180) (3) Data frame sent\nI0810 00:48:33.421697 3157 log.go:181] (0xc000a73080) Data frame received for 5\nI0810 00:48:33.421712 3157 log.go:181] (0xc000678140) (5) Data frame handling\nI0810 00:48:33.421720 3157 log.go:181] (0xc000678140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.192.41:80/\nI0810 00:48:33.421771 3157 log.go:181] (0xc000a73080) Data frame received for 3\nI0810 00:48:33.421792 3157 log.go:181] (0xc0008a1180) (3) Data frame handling\nI0810 00:48:33.421813 3157 log.go:181] (0xc0008a1180) (3) Data frame sent\nI0810 00:48:33.428301 3157 log.go:181] (0xc000a73080) Data frame received for 3\nI0810 00:48:33.428323 3157 log.go:181] (0xc0008a1180) (3) Data frame handling\nI0810 00:48:33.428334 3157 log.go:181] (0xc0008a1180) (3) Data frame sent\nI0810 00:48:33.428565 3157 log.go:181] (0xc000a73080) Data frame received for 5\nI0810 00:48:33.428579 3157 log.go:181] (0xc000678140) (5) Data frame handling\nI0810 00:48:33.428587 3157 log.go:181] (0xc000678140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.192.41:80/\nI0810 00:48:33.428598 3157 log.go:181] (0xc000a73080) Data frame received for 3\nI0810 00:48:33.428606 3157 log.go:181] (0xc0008a1180) (3) Data frame handling\nI0810 00:48:33.428616 3157 log.go:181] (0xc0008a1180) (3) Data frame sent\nI0810 00:48:33.433372 3157 log.go:181] (0xc000a73080) Data frame received for 3\nI0810 00:48:33.433390 3157 log.go:181] (0xc0008a1180) (3) Data frame handling\nI0810 00:48:33.433407 3157 log.go:181] (0xc0008a1180) (3) Data frame sent\nI0810 00:48:33.433873 3157 log.go:181] (0xc000a73080) Data frame received for 5\nI0810 
00:48:33.433895 3157 log.go:181] (0xc000a73080) Data frame received for 3\nI0810 00:48:33.433907 3157 log.go:181] (0xc0008a1180) (3) Data frame handling\nI0810 00:48:33.433917 3157 log.go:181] (0xc0008a1180) (3) Data frame sent\nI0810 00:48:33.433929 3157 log.go:181] (0xc000678140) (5) Data frame handling\nI0810 00:48:33.433938 3157 log.go:181] (0xc000678140) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.192.41:80/\nI0810 00:48:33.438686 3157 log.go:181] (0xc000a73080) Data frame received for 3\nI0810 00:48:33.438707 3157 log.go:181] (0xc0008a1180) (3) Data frame handling\nI0810 00:48:33.438727 3157 log.go:181] (0xc0008a1180) (3) Data frame sent\nI0810 00:48:33.439076 3157 log.go:181] (0xc000a73080) Data frame received for 5\nI0810 00:48:33.439095 3157 log.go:181] (0xc000678140) (5) Data frame handling\nI0810 00:48:33.439244 3157 log.go:181] (0xc000a73080) Data frame received for 3\nI0810 00:48:33.439256 3157 log.go:181] (0xc0008a1180) (3) Data frame handling\nI0810 00:48:33.440711 3157 log.go:181] (0xc000a73080) Data frame received for 1\nI0810 00:48:33.440832 3157 log.go:181] (0xc000fac3c0) (1) Data frame handling\nI0810 00:48:33.440848 3157 log.go:181] (0xc000fac3c0) (1) Data frame sent\nI0810 00:48:33.440860 3157 log.go:181] (0xc000a73080) (0xc000fac3c0) Stream removed, broadcasting: 1\nI0810 00:48:33.440876 3157 log.go:181] (0xc000a73080) Go away received\nI0810 00:48:33.441227 3157 log.go:181] (0xc000a73080) (0xc000fac3c0) Stream removed, broadcasting: 1\nI0810 00:48:33.441249 3157 log.go:181] (0xc000a73080) (0xc0008a1180) Stream removed, broadcasting: 3\nI0810 00:48:33.441260 3157 log.go:181] (0xc000a73080) (0xc000678140) Stream removed, broadcasting: 5\n" Aug 10 00:48:33.447: INFO: stdout: 
"\naffinity-clusterip-transition-v2zk6\naffinity-clusterip-transition-xst7m\naffinity-clusterip-transition-gv8cg\naffinity-clusterip-transition-gv8cg\naffinity-clusterip-transition-v2zk6\naffinity-clusterip-transition-xst7m\naffinity-clusterip-transition-gv8cg\naffinity-clusterip-transition-v2zk6\naffinity-clusterip-transition-xst7m\naffinity-clusterip-transition-v2zk6\naffinity-clusterip-transition-gv8cg\naffinity-clusterip-transition-xst7m\naffinity-clusterip-transition-v2zk6\naffinity-clusterip-transition-v2zk6\naffinity-clusterip-transition-xst7m\naffinity-clusterip-transition-gv8cg" Aug 10 00:48:33.448: INFO: Received response from host: affinity-clusterip-transition-v2zk6 Aug 10 00:48:33.448: INFO: Received response from host: affinity-clusterip-transition-xst7m Aug 10 00:48:33.448: INFO: Received response from host: affinity-clusterip-transition-gv8cg Aug 10 00:48:33.448: INFO: Received response from host: affinity-clusterip-transition-gv8cg Aug 10 00:48:33.448: INFO: Received response from host: affinity-clusterip-transition-v2zk6 Aug 10 00:48:33.448: INFO: Received response from host: affinity-clusterip-transition-xst7m Aug 10 00:48:33.448: INFO: Received response from host: affinity-clusterip-transition-gv8cg Aug 10 00:48:33.448: INFO: Received response from host: affinity-clusterip-transition-v2zk6 Aug 10 00:48:33.448: INFO: Received response from host: affinity-clusterip-transition-xst7m Aug 10 00:48:33.448: INFO: Received response from host: affinity-clusterip-transition-v2zk6 Aug 10 00:48:33.448: INFO: Received response from host: affinity-clusterip-transition-gv8cg Aug 10 00:48:33.448: INFO: Received response from host: affinity-clusterip-transition-xst7m Aug 10 00:48:33.448: INFO: Received response from host: affinity-clusterip-transition-v2zk6 Aug 10 00:48:33.448: INFO: Received response from host: affinity-clusterip-transition-v2zk6 Aug 10 00:48:33.448: INFO: Received response from host: affinity-clusterip-transition-xst7m Aug 10 00:48:33.448: 
INFO: Received response from host: affinity-clusterip-transition-gv8cg Aug 10 00:48:33.472: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-6479 execpod-affinity26g2m -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.99.192.41:80/ ; done' Aug 10 00:48:33.802: INFO: stderr: "I0810 00:48:33.622212 3175 log.go:181] (0xc00017efd0) (0xc00099fcc0) Create stream\nI0810 00:48:33.622275 3175 log.go:181] (0xc00017efd0) (0xc00099fcc0) Stream added, broadcasting: 1\nI0810 00:48:33.626527 3175 log.go:181] (0xc00017efd0) Reply frame received for 1\nI0810 00:48:33.626569 3175 log.go:181] (0xc00017efd0) (0xc00099f4a0) Create stream\nI0810 00:48:33.626582 3175 log.go:181] (0xc00017efd0) (0xc00099f4a0) Stream added, broadcasting: 3\nI0810 00:48:33.627510 3175 log.go:181] (0xc00017efd0) Reply frame received for 3\nI0810 00:48:33.627551 3175 log.go:181] (0xc00017efd0) (0xc00099f540) Create stream\nI0810 00:48:33.627564 3175 log.go:181] (0xc00017efd0) (0xc00099f540) Stream added, broadcasting: 5\nI0810 00:48:33.628528 3175 log.go:181] (0xc00017efd0) Reply frame received for 5\nI0810 00:48:33.701125 3175 log.go:181] (0xc00017efd0) Data frame received for 3\nI0810 00:48:33.701156 3175 log.go:181] (0xc00099f4a0) (3) Data frame handling\nI0810 00:48:33.701169 3175 log.go:181] (0xc00099f4a0) (3) Data frame sent\nI0810 00:48:33.701188 3175 log.go:181] (0xc00017efd0) Data frame received for 5\nI0810 00:48:33.701193 3175 log.go:181] (0xc00099f540) (5) Data frame handling\nI0810 00:48:33.701198 3175 log.go:181] (0xc00099f540) (5) Data frame sent\nI0810 00:48:33.701204 3175 log.go:181] (0xc00017efd0) Data frame received for 5\nI0810 00:48:33.701208 3175 log.go:181] (0xc00099f540) (5) Data frame handling\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.192.41:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.192.41:80/\nI0810 00:48:33.701237 3175 
log.go:181] (0xc00017efd0) Data frame received for 3\nI0810 00:48:33.701280 3175 log.go:181] (0xc00099f4a0) (3) Data frame handling\nI0810 00:48:33.701296 3175 log.go:181] (0xc00099f4a0) (3) Data frame sent\nI0810 00:48:33.701313 3175 log.go:181] (0xc00099f540) (5) Data frame sent\nI0810 00:48:33.706637 3175 log.go:181] (0xc00017efd0) Data frame received for 3\nI0810 00:48:33.706661 3175 log.go:181] (0xc00099f4a0) (3) Data frame handling\nI0810 00:48:33.706682 3175 log.go:181] (0xc00099f4a0) (3) Data frame sent\nI0810 00:48:33.707144 3175 log.go:181] (0xc00017efd0) Data frame received for 3\nI0810 00:48:33.707179 3175 log.go:181] (0xc00099f4a0) (3) Data frame handling\nI0810 00:48:33.707195 3175 log.go:181] (0xc00099f4a0) (3) Data frame sent\nI0810 00:48:33.707210 3175 log.go:181] (0xc00017efd0) Data frame received for 5\nI0810 00:48:33.707219 3175 log.go:181] (0xc00099f540) (5) Data frame handling\nI0810 00:48:33.707228 3175 log.go:181] (0xc00099f540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.192.41:80/\nI0810 00:48:33.713736 3175 log.go:181] (0xc00017efd0) Data frame received for 3\nI0810 00:48:33.713761 3175 log.go:181] (0xc00099f4a0) (3) Data frame handling\nI0810 00:48:33.713774 3175 log.go:181] (0xc00099f4a0) (3) Data frame sent\nI0810 00:48:33.714326 3175 log.go:181] (0xc00017efd0) Data frame received for 3\nI0810 00:48:33.714354 3175 log.go:181] (0xc00099f4a0) (3) Data frame handling\nI0810 00:48:33.714367 3175 log.go:181] (0xc00099f4a0) (3) Data frame sent\nI0810 00:48:33.714385 3175 log.go:181] (0xc00017efd0) Data frame received for 5\nI0810 00:48:33.714394 3175 log.go:181] (0xc00099f540) (5) Data frame handling\nI0810 00:48:33.714408 3175 log.go:181] (0xc00099f540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.192.41:80/\nI0810 00:48:33.718776 3175 log.go:181] (0xc00017efd0) Data frame received for 3\nI0810 00:48:33.718797 3175 log.go:181] (0xc00099f4a0) (3) Data frame handling\nI0810 
00:48:33.718817 3175 log.go:181] (0xc00099f4a0) (3) Data frame sent\nI0810 00:48:33.719327 3175 log.go:181] (0xc00017efd0) Data frame received for 3\nI0810 00:48:33.719361 3175 log.go:181] (0xc00099f4a0) (3) Data frame handling\nI0810 00:48:33.719380 3175 log.go:181] (0xc00099f4a0) (3) Data frame sent\nI0810 00:48:33.719401 3175 log.go:181] (0xc00017efd0) Data frame received for 5\nI0810 00:48:33.719411 3175 log.go:181] (0xc00099f540) (5) Data frame handling\nI0810 00:48:33.719436 3175 log.go:181] (0xc00099f540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.192.41:80/\nI0810 00:48:33.725613 3175 log.go:181] (0xc00017efd0) Data frame received for 3\nI0810 00:48:33.725640 3175 log.go:181] (0xc00099f4a0) (3) Data frame handling\nI0810 00:48:33.725657 3175 log.go:181] (0xc00099f4a0) (3) Data frame sent\nI0810 00:48:33.726242 3175 log.go:181] (0xc00017efd0) Data frame received for 5\nI0810 00:48:33.726272 3175 log.go:181] (0xc00099f540) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.192.41:80/\nI0810 00:48:33.726292 3175 log.go:181] (0xc00017efd0) Data frame received for 3\nI0810 00:48:33.726316 3175 log.go:181] (0xc00099f4a0) (3) Data frame handling\nI0810 00:48:33.726333 3175 log.go:181] (0xc00099f4a0) (3) Data frame sent\nI0810 00:48:33.726361 3175 log.go:181] (0xc00099f540) (5) Data frame sent\nI0810 00:48:33.732212 3175 log.go:181] (0xc00017efd0) Data frame received for 3\nI0810 00:48:33.732239 3175 log.go:181] (0xc00099f4a0) (3) Data frame handling\nI0810 00:48:33.732258 3175 log.go:181] (0xc00099f4a0) (3) Data frame sent\nI0810 00:48:33.733116 3175 log.go:181] (0xc00017efd0) Data frame received for 3\nI0810 00:48:33.733151 3175 log.go:181] (0xc00099f4a0) (3) Data frame handling\nI0810 00:48:33.733165 3175 log.go:181] (0xc00099f4a0) (3) Data frame sent\nI0810 00:48:33.733180 3175 log.go:181] (0xc00017efd0) Data frame received for 5\nI0810 00:48:33.733203 3175 log.go:181] (0xc00099f540) (5) Data frame 
handling\nI0810 00:48:33.733225 3175 log.go:181] (0xc00099f540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.192.41:80/\nI0810 00:48:33.738298 3175 log.go:181] (0xc00017efd0) Data frame received for 3\nI0810 00:48:33.738324 3175 log.go:181] (0xc00099f4a0) (3) Data frame handling\nI0810 00:48:33.738341 3175 log.go:181] (0xc00099f4a0) (3) Data frame sent\nI0810 00:48:33.738825 3175 log.go:181] (0xc00017efd0) Data frame received for 3\nI0810 00:48:33.738844 3175 log.go:181] (0xc00099f4a0) (3) Data frame handling\nI0810 00:48:33.738865 3175 log.go:181] (0xc00099f4a0) (3) Data frame sent\nI0810 00:48:33.738892 3175 log.go:181] (0xc00017efd0) Data frame received for 5\nI0810 00:48:33.738915 3175 log.go:181] (0xc00099f540) (5) Data frame handling\nI0810 00:48:33.738935 3175 log.go:181] (0xc00099f540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.192.41:80/\nI0810 00:48:33.743737 3175 log.go:181] (0xc00017efd0) Data frame received for 3\nI0810 00:48:33.743750 3175 log.go:181] (0xc00099f4a0) (3) Data frame handling\nI0810 00:48:33.743763 3175 log.go:181] (0xc00099f4a0) (3) Data frame sent\nI0810 00:48:33.744462 3175 log.go:181] (0xc00017efd0) Data frame received for 5\nI0810 00:48:33.744478 3175 log.go:181] (0xc00099f540) (5) Data frame handling\nI0810 00:48:33.744492 3175 log.go:181] (0xc00099f540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.192.41:80/\nI0810 00:48:33.744521 3175 log.go:181] (0xc00017efd0) Data frame received for 3\nI0810 00:48:33.744542 3175 log.go:181] (0xc00099f4a0) (3) Data frame handling\nI0810 00:48:33.744557 3175 log.go:181] (0xc00099f4a0) (3) Data frame sent\nI0810 00:48:33.750434 3175 log.go:181] (0xc00017efd0) Data frame received for 3\nI0810 00:48:33.750458 3175 log.go:181] (0xc00099f4a0) (3) Data frame handling\nI0810 00:48:33.750486 3175 log.go:181] (0xc00099f4a0) (3) Data frame sent\nI0810 00:48:33.751034 3175 log.go:181] (0xc00017efd0) Data frame 
received for 3\nI0810 00:48:33.751057 3175 log.go:181] (0xc00099f4a0) (3) Data frame handling\nI0810 00:48:33.751085 3175 log.go:181] (0xc00017efd0) Data frame received for 5\nI0810 00:48:33.751109 3175 log.go:181] (0xc00099f540) (5) Data frame handling\nI0810 00:48:33.751126 3175 log.go:181] (0xc00099f540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.192.41:80/\nI0810 00:48:33.751149 3175 log.go:181] (0xc00099f4a0) (3) Data frame sent\nI0810 00:48:33.755729 3175 log.go:181] (0xc00017efd0) Data frame received for 3\nI0810 00:48:33.755751 3175 log.go:181] (0xc00099f4a0) (3) Data frame handling\nI0810 00:48:33.755776 3175 log.go:181] (0xc00099f4a0) (3) Data frame sent\nI0810 00:48:33.756271 3175 log.go:181] (0xc00017efd0) Data frame received for 3\nI0810 00:48:33.756298 3175 log.go:181] (0xc00099f4a0) (3) Data frame handling\nI0810 00:48:33.756332 3175 log.go:181] (0xc00099f4a0) (3) Data frame sent\nI0810 00:48:33.756353 3175 log.go:181] (0xc00017efd0) Data frame received for 5\nI0810 00:48:33.756369 3175 log.go:181] (0xc00099f540) (5) Data frame handling\nI0810 00:48:33.756390 3175 log.go:181] (0xc00099f540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.192.41:80/\nI0810 00:48:33.762953 3175 log.go:181] (0xc00017efd0) Data frame received for 3\nI0810 00:48:33.762971 3175 log.go:181] (0xc00099f4a0) (3) Data frame handling\nI0810 00:48:33.762981 3175 log.go:181] (0xc00099f4a0) (3) Data frame sent\nI0810 00:48:33.763864 3175 log.go:181] (0xc00017efd0) Data frame received for 3\nI0810 00:48:33.763884 3175 log.go:181] (0xc00099f4a0) (3) Data frame handling\nI0810 00:48:33.763896 3175 log.go:181] (0xc00099f4a0) (3) Data frame sent\nI0810 00:48:33.763927 3175 log.go:181] (0xc00017efd0) Data frame received for 5\nI0810 00:48:33.763948 3175 log.go:181] (0xc00099f540) (5) Data frame handling\nI0810 00:48:33.763964 3175 log.go:181] (0xc00099f540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.99.192.41:80/\nI0810 00:48:33.767629 3175 log.go:181] (0xc00017efd0) Data frame received for 3\nI0810 00:48:33.767646 3175 log.go:181] (0xc00099f4a0) (3) Data frame handling\nI0810 00:48:33.767659 3175 log.go:181] (0xc00099f4a0) (3) Data frame sent\nI0810 00:48:33.768074 3175 log.go:181] (0xc00017efd0) Data frame received for 5\nI0810 00:48:33.768095 3175 log.go:181] (0xc00099f540) (5) Data frame handling\nI0810 00:48:33.768106 3175 log.go:181] (0xc00099f540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.192.41:80/\nI0810 00:48:33.768118 3175 log.go:181] (0xc00017efd0) Data frame received for 3\nI0810 00:48:33.768124 3175 log.go:181] (0xc00099f4a0) (3) Data frame handling\nI0810 00:48:33.768132 3175 log.go:181] (0xc00099f4a0) (3) Data frame sent\nI0810 00:48:33.774176 3175 log.go:181] (0xc00017efd0) Data frame received for 3\nI0810 00:48:33.774208 3175 log.go:181] (0xc00099f4a0) (3) Data frame handling\nI0810 00:48:33.774239 3175 log.go:181] (0xc00099f4a0) (3) Data frame sent\nI0810 00:48:33.774857 3175 log.go:181] (0xc00017efd0) Data frame received for 5\nI0810 00:48:33.774889 3175 log.go:181] (0xc00099f540) (5) Data frame handling\nI0810 00:48:33.774902 3175 log.go:181] (0xc00099f540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.192.41:80/\nI0810 00:48:33.774919 3175 log.go:181] (0xc00017efd0) Data frame received for 3\nI0810 00:48:33.774940 3175 log.go:181] (0xc00099f4a0) (3) Data frame handling\nI0810 00:48:33.774955 3175 log.go:181] (0xc00099f4a0) (3) Data frame sent\nI0810 00:48:33.781763 3175 log.go:181] (0xc00017efd0) Data frame received for 3\nI0810 00:48:33.781790 3175 log.go:181] (0xc00099f4a0) (3) Data frame handling\nI0810 00:48:33.781810 3175 log.go:181] (0xc00099f4a0) (3) Data frame sent\nI0810 00:48:33.782723 3175 log.go:181] (0xc00017efd0) Data frame received for 3\nI0810 00:48:33.782748 3175 log.go:181] (0xc00099f4a0) (3) Data frame handling\nI0810 00:48:33.782760 3175 
log.go:181] (0xc00099f4a0) (3) Data frame sent\nI0810 00:48:33.782776 3175 log.go:181] (0xc00017efd0) Data frame received for 5\nI0810 00:48:33.782786 3175 log.go:181] (0xc00099f540) (5) Data frame handling\nI0810 00:48:33.782796 3175 log.go:181] (0xc00099f540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.192.41:80/\nI0810 00:48:33.788142 3175 log.go:181] (0xc00017efd0) Data frame received for 3\nI0810 00:48:33.788171 3175 log.go:181] (0xc00099f4a0) (3) Data frame handling\nI0810 00:48:33.788204 3175 log.go:181] (0xc00099f4a0) (3) Data frame sent\nI0810 00:48:33.788633 3175 log.go:181] (0xc00017efd0) Data frame received for 3\nI0810 00:48:33.788665 3175 log.go:181] (0xc00099f4a0) (3) Data frame handling\nI0810 00:48:33.788715 3175 log.go:181] (0xc00099f4a0) (3) Data frame sent\nI0810 00:48:33.788872 3175 log.go:181] (0xc00017efd0) Data frame received for 5\nI0810 00:48:33.788899 3175 log.go:181] (0xc00099f540) (5) Data frame handling\nI0810 00:48:33.788919 3175 log.go:181] (0xc00099f540) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.99.192.41:80/\nI0810 00:48:33.792580 3175 log.go:181] (0xc00017efd0) Data frame received for 3\nI0810 00:48:33.792599 3175 log.go:181] (0xc00099f4a0) (3) Data frame handling\nI0810 00:48:33.792614 3175 log.go:181] (0xc00099f4a0) (3) Data frame sent\nI0810 00:48:33.793847 3175 log.go:181] (0xc00017efd0) Data frame received for 5\nI0810 00:48:33.793894 3175 log.go:181] (0xc00099f540) (5) Data frame handling\nI0810 00:48:33.793919 3175 log.go:181] (0xc00017efd0) Data frame received for 3\nI0810 00:48:33.793939 3175 log.go:181] (0xc00099f4a0) (3) Data frame handling\nI0810 00:48:33.795818 3175 log.go:181] (0xc00017efd0) Data frame received for 1\nI0810 00:48:33.795851 3175 log.go:181] (0xc00099fcc0) (1) Data frame handling\nI0810 00:48:33.795889 3175 log.go:181] (0xc00099fcc0) (1) Data frame sent\nI0810 00:48:33.795921 3175 log.go:181] (0xc00017efd0) (0xc00099fcc0) Stream removed, 
broadcasting: 1\nI0810 00:48:33.795946 3175 log.go:181] (0xc00017efd0) Go away received\nI0810 00:48:33.796478 3175 log.go:181] (0xc00017efd0) (0xc00099fcc0) Stream removed, broadcasting: 1\nI0810 00:48:33.796501 3175 log.go:181] (0xc00017efd0) (0xc00099f4a0) Stream removed, broadcasting: 3\nI0810 00:48:33.796527 3175 log.go:181] (0xc00017efd0) (0xc00099f540) Stream removed, broadcasting: 5\n" Aug 10 00:48:33.803: INFO: stdout: "\naffinity-clusterip-transition-v2zk6\naffinity-clusterip-transition-v2zk6\naffinity-clusterip-transition-v2zk6\naffinity-clusterip-transition-v2zk6\naffinity-clusterip-transition-v2zk6\naffinity-clusterip-transition-v2zk6\naffinity-clusterip-transition-v2zk6\naffinity-clusterip-transition-v2zk6\naffinity-clusterip-transition-v2zk6\naffinity-clusterip-transition-v2zk6\naffinity-clusterip-transition-v2zk6\naffinity-clusterip-transition-v2zk6\naffinity-clusterip-transition-v2zk6\naffinity-clusterip-transition-v2zk6\naffinity-clusterip-transition-v2zk6\naffinity-clusterip-transition-v2zk6" Aug 10 00:48:33.803: INFO: Received response from host: affinity-clusterip-transition-v2zk6 Aug 10 00:48:33.803: INFO: Received response from host: affinity-clusterip-transition-v2zk6 Aug 10 00:48:33.803: INFO: Received response from host: affinity-clusterip-transition-v2zk6 Aug 10 00:48:33.803: INFO: Received response from host: affinity-clusterip-transition-v2zk6 Aug 10 00:48:33.803: INFO: Received response from host: affinity-clusterip-transition-v2zk6 Aug 10 00:48:33.803: INFO: Received response from host: affinity-clusterip-transition-v2zk6 Aug 10 00:48:33.803: INFO: Received response from host: affinity-clusterip-transition-v2zk6 Aug 10 00:48:33.803: INFO: Received response from host: affinity-clusterip-transition-v2zk6 Aug 10 00:48:33.803: INFO: Received response from host: affinity-clusterip-transition-v2zk6 Aug 10 00:48:33.803: INFO: Received response from host: affinity-clusterip-transition-v2zk6 Aug 10 00:48:33.803: INFO: Received response from 
host: affinity-clusterip-transition-v2zk6 Aug 10 00:48:33.803: INFO: Received response from host: affinity-clusterip-transition-v2zk6 Aug 10 00:48:33.803: INFO: Received response from host: affinity-clusterip-transition-v2zk6 Aug 10 00:48:33.803: INFO: Received response from host: affinity-clusterip-transition-v2zk6 Aug 10 00:48:33.803: INFO: Received response from host: affinity-clusterip-transition-v2zk6 Aug 10 00:48:33.803: INFO: Received response from host: affinity-clusterip-transition-v2zk6 Aug 10 00:48:33.803: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-6479, will wait for the garbage collector to delete the pods Aug 10 00:48:33.907: INFO: Deleting ReplicationController affinity-clusterip-transition took: 6.025335ms Aug 10 00:48:34.507: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 600.230757ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:48:44.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6479" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:22.788 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":250,"skipped":4159,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:48:44.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 10 00:48:44.763: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 10 00:48:46.779: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617324, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617324, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617324, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617324, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 10 00:48:49.830: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:48:50.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4575" for this suite. 
STEP: Destroying namespace "webhook-4575-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.008 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":303,"completed":251,"skipped":4161,"failed":0} SSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:48:50.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2947.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2947.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 10 00:48:56.482: INFO: DNS probes using dns-2947/dns-test-59d977c9-5b53-4c0f-9221-14ce743a5b42 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:48:56.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2947" for this suite. 
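The probe script above derives a pod's DNS A-record name from its IP with `hostname -i | awk -F. ...`. A standalone sketch of that derivation, using a hypothetical pod IP and the `dns-2947` namespace from this run (`$$` in the log is Makefile-style escaping for a single `$`):

```shell
# Convert a pod IP into its in-cluster A-record name:
# <ip-with-dashes>.<namespace>.pod.cluster.local
pod_ip="10.244.1.109"   # hypothetical pod IP
ns="dns-2947"           # test namespace from the log

pod_a_rec="$(printf '%s' "$pod_ip" \
  | awk -F. -v ns="$ns" '{print $1"-"$2"-"$3"-"$4"."ns".pod.cluster.local"}')"
echo "$pod_a_rec"
```

The probe pod then resolves this name over both UDP and TCP (`dig +notcp` / `dig +tcp`) and writes an `OK` marker file per successful lookup.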
• [SLOW TEST:6.385 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":303,"completed":252,"skipped":4164,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:48:56.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-292 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 10 00:48:57.042: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Aug 10 00:48:57.133: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 10 00:48:59.163: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 10 00:49:01.138: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Aug 10 00:49:03.137: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 00:49:05.137: INFO: The status of 
Pod netserver-0 is Running (Ready = false) Aug 10 00:49:07.137: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 00:49:09.137: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 00:49:11.137: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 00:49:13.137: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 00:49:15.137: INFO: The status of Pod netserver-0 is Running (Ready = false) Aug 10 00:49:17.137: INFO: The status of Pod netserver-0 is Running (Ready = true) Aug 10 00:49:17.141: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Aug 10 00:49:21.175: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.102:8080/dial?request=hostname&protocol=udp&host=10.244.1.109&port=8081&tries=1'] Namespace:pod-network-test-292 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 10 00:49:21.175: INFO: >>> kubeConfig: /root/.kube/config I0810 00:49:21.215073 8 log.go:181] (0xc000e1e580) (0xc002c25cc0) Create stream I0810 00:49:21.215106 8 log.go:181] (0xc000e1e580) (0xc002c25cc0) Stream added, broadcasting: 1 I0810 00:49:21.217838 8 log.go:181] (0xc000e1e580) Reply frame received for 1 I0810 00:49:21.217889 8 log.go:181] (0xc000e1e580) (0xc002c25d60) Create stream I0810 00:49:21.217905 8 log.go:181] (0xc000e1e580) (0xc002c25d60) Stream added, broadcasting: 3 I0810 00:49:21.219060 8 log.go:181] (0xc000e1e580) Reply frame received for 3 I0810 00:49:21.219106 8 log.go:181] (0xc000e1e580) (0xc001d28960) Create stream I0810 00:49:21.219122 8 log.go:181] (0xc000e1e580) (0xc001d28960) Stream added, broadcasting: 5 I0810 00:49:21.220036 8 log.go:181] (0xc000e1e580) Reply frame received for 5 I0810 00:49:21.325499 8 log.go:181] (0xc000e1e580) Data frame received for 3 I0810 00:49:21.325544 8 log.go:181] (0xc002c25d60) (3) Data frame handling I0810 00:49:21.325579 8 log.go:181] 
(0xc002c25d60) (3) Data frame sent I0810 00:49:21.326278 8 log.go:181] (0xc000e1e580) Data frame received for 5 I0810 00:49:21.326339 8 log.go:181] (0xc001d28960) (5) Data frame handling I0810 00:49:21.326820 8 log.go:181] (0xc000e1e580) Data frame received for 3 I0810 00:49:21.326845 8 log.go:181] (0xc002c25d60) (3) Data frame handling I0810 00:49:21.328266 8 log.go:181] (0xc000e1e580) Data frame received for 1 I0810 00:49:21.328303 8 log.go:181] (0xc002c25cc0) (1) Data frame handling I0810 00:49:21.328354 8 log.go:181] (0xc002c25cc0) (1) Data frame sent I0810 00:49:21.328380 8 log.go:181] (0xc000e1e580) (0xc002c25cc0) Stream removed, broadcasting: 1 I0810 00:49:21.328518 8 log.go:181] (0xc000e1e580) (0xc002c25cc0) Stream removed, broadcasting: 1 I0810 00:49:21.328553 8 log.go:181] (0xc000e1e580) (0xc002c25d60) Stream removed, broadcasting: 3 I0810 00:49:21.328670 8 log.go:181] (0xc000e1e580) Go away received I0810 00:49:21.328919 8 log.go:181] (0xc000e1e580) (0xc001d28960) Stream removed, broadcasting: 5 Aug 10 00:49:21.328: INFO: Waiting for responses: map[] Aug 10 00:49:21.332: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.102:8080/dial?request=hostname&protocol=udp&host=10.244.2.101&port=8081&tries=1'] Namespace:pod-network-test-292 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 10 00:49:21.332: INFO: >>> kubeConfig: /root/.kube/config I0810 00:49:21.367341 8 log.go:181] (0xc000e1ed10) (0xc003842280) Create stream I0810 00:49:21.367367 8 log.go:181] (0xc000e1ed10) (0xc003842280) Stream added, broadcasting: 1 I0810 00:49:21.369697 8 log.go:181] (0xc000e1ed10) Reply frame received for 1 I0810 00:49:21.369742 8 log.go:181] (0xc000e1ed10) (0xc000d05860) Create stream I0810 00:49:21.369755 8 log.go:181] (0xc000e1ed10) (0xc000d05860) Stream added, broadcasting: 3 I0810 00:49:21.370605 8 log.go:181] (0xc000e1ed10) Reply frame received for 3 I0810 00:49:21.370638 
8 log.go:181] (0xc000e1ed10) (0xc001d28e60) Create stream I0810 00:49:21.370652 8 log.go:181] (0xc000e1ed10) (0xc001d28e60) Stream added, broadcasting: 5 I0810 00:49:21.371411 8 log.go:181] (0xc000e1ed10) Reply frame received for 5 I0810 00:49:21.452534 8 log.go:181] (0xc000e1ed10) Data frame received for 3 I0810 00:49:21.452571 8 log.go:181] (0xc000d05860) (3) Data frame handling I0810 00:49:21.452596 8 log.go:181] (0xc000d05860) (3) Data frame sent I0810 00:49:21.453079 8 log.go:181] (0xc000e1ed10) Data frame received for 3 I0810 00:49:21.453092 8 log.go:181] (0xc000d05860) (3) Data frame handling I0810 00:49:21.453202 8 log.go:181] (0xc000e1ed10) Data frame received for 5 I0810 00:49:21.453226 8 log.go:181] (0xc001d28e60) (5) Data frame handling I0810 00:49:21.454887 8 log.go:181] (0xc000e1ed10) Data frame received for 1 I0810 00:49:21.454900 8 log.go:181] (0xc003842280) (1) Data frame handling I0810 00:49:21.454914 8 log.go:181] (0xc003842280) (1) Data frame sent I0810 00:49:21.455034 8 log.go:181] (0xc000e1ed10) (0xc003842280) Stream removed, broadcasting: 1 I0810 00:49:21.455152 8 log.go:181] (0xc000e1ed10) (0xc003842280) Stream removed, broadcasting: 1 I0810 00:49:21.455174 8 log.go:181] (0xc000e1ed10) (0xc000d05860) Stream removed, broadcasting: 3 I0810 00:49:21.455189 8 log.go:181] (0xc000e1ed10) (0xc001d28e60) Stream removed, broadcasting: 5 Aug 10 00:49:21.455: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 I0810 00:49:21.455274 8 log.go:181] (0xc000e1ed10) Go away received Aug 10 00:49:21.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-292" for this suite. 
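The intra-pod UDP check above execs `curl` against the test container's agnhost netexec `/dial` endpoint, which relays a `hostname` request to each netserver pod. A sketch of how the probe URL seen in the log is assembled (the `dial_url` helper is hypothetical; the IPs are the ones from this run):

```shell
# Build the netexec /dial probe URL: the proxy pod (port 8080) dials the
# target pod on the given port/protocol and reports the responses it got.
dial_url() {
  proxy="$1"; target="$2"; port="$3"; proto="$4"
  printf 'http://%s:8080/dial?request=hostname&protocol=%s&host=%s&port=%s&tries=1' \
    "$proxy" "$proto" "$target" "$port"
}

dial_url 10.244.2.102 10.244.1.109 8081 udp
```

The test then parses the JSON body; an empty `Waiting for responses: map[]` means every expected hostname was seen.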
• [SLOW TEST:24.864 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":303,"completed":253,"skipped":4177,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:49:21.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-c32c0387-48aa-48a0-a502-486e570db311 STEP: Creating a pod to test consume secrets Aug 10 00:49:21.670: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-581ce974-28f8-48a3-b316-fff6026bf22b" in namespace "projected-6905" to be "Succeeded or Failed" Aug 10 00:49:21.673: INFO: Pod "pod-projected-secrets-581ce974-28f8-48a3-b316-fff6026bf22b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.964482ms Aug 10 00:49:23.677: INFO: Pod "pod-projected-secrets-581ce974-28f8-48a3-b316-fff6026bf22b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007207288s Aug 10 00:49:25.681: INFO: Pod "pod-projected-secrets-581ce974-28f8-48a3-b316-fff6026bf22b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011078491s STEP: Saw pod success Aug 10 00:49:25.681: INFO: Pod "pod-projected-secrets-581ce974-28f8-48a3-b316-fff6026bf22b" satisfied condition "Succeeded or Failed" Aug 10 00:49:25.683: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-581ce974-28f8-48a3-b316-fff6026bf22b container projected-secret-volume-test: STEP: delete the pod Aug 10 00:49:25.731: INFO: Waiting for pod pod-projected-secrets-581ce974-28f8-48a3-b316-fff6026bf22b to disappear Aug 10 00:49:25.739: INFO: Pod pod-projected-secrets-581ce974-28f8-48a3-b316-fff6026bf22b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:49:25.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6905" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":254,"skipped":4187,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:49:25.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 10 00:49:26.561: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 10 00:49:28.572: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617366, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617366, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617366, 
loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617366, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 10 00:49:30.583: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617366, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617366, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617366, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617366, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 10 00:49:33.622: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook 
latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:49:45.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7508" for this suite. STEP: Destroying namespace "webhook-7508-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:20.156 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":303,"completed":255,"skipped":4194,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:49:45.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and 
listed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:49:46.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-2494" for this suite. •{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":303,"completed":256,"skipped":4230,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:49:46.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 
10 00:49:50.149: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:49:50.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3259" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":257,"skipped":4279,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:49:50.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium Aug 10 00:49:50.330: INFO: Waiting up to 5m0s for pod "pod-08a6c353-3668-42d1-8060-300985409e92" in namespace "emptydir-4193" to be "Succeeded or Failed" Aug 10 00:49:50.365: INFO: Pod "pod-08a6c353-3668-42d1-8060-300985409e92": Phase="Pending", Reason="", readiness=false. 
Elapsed: 35.499436ms Aug 10 00:49:52.370: INFO: Pod "pod-08a6c353-3668-42d1-8060-300985409e92": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039730164s Aug 10 00:49:54.374: INFO: Pod "pod-08a6c353-3668-42d1-8060-300985409e92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043701581s STEP: Saw pod success Aug 10 00:49:54.374: INFO: Pod "pod-08a6c353-3668-42d1-8060-300985409e92" satisfied condition "Succeeded or Failed" Aug 10 00:49:54.376: INFO: Trying to get logs from node latest-worker2 pod pod-08a6c353-3668-42d1-8060-300985409e92 container test-container: STEP: delete the pod Aug 10 00:49:54.453: INFO: Waiting for pod pod-08a6c353-3668-42d1-8060-300985409e92 to disappear Aug 10 00:49:54.471: INFO: Pod pod-08a6c353-3668-42d1-8060-300985409e92 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:49:54.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4193" for this suite. 
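The emptyDir test above verifies that a file created with mode 0666 on the default medium is readable and writable by all. The core permission assertion can be reproduced locally like this (a sketch assuming GNU `stat -c`, which is not portable to BSD/macOS):

```shell
# Create a file, force mode 0666, and read the octal mode back.
tmp="$(mktemp -d)"
f="$tmp/mount-test"
: > "$f"
chmod 0666 "$f"

mode="$(stat -c '%a' "$f")"   # GNU stat; octal permission bits
echo "$mode"
rm -rf "$tmp"
```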
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":258,"skipped":4315,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:49:54.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-32d01836-dad2-4af4-ac2d-a731c27a043e STEP: Creating a pod to test consume configMaps Aug 10 00:49:54.538: INFO: Waiting up to 5m0s for pod "pod-configmaps-d5f267c8-8336-4653-807d-226bc0eb0d8d" in namespace "configmap-4448" to be "Succeeded or Failed" Aug 10 00:49:54.585: INFO: Pod "pod-configmaps-d5f267c8-8336-4653-807d-226bc0eb0d8d": Phase="Pending", Reason="", readiness=false. Elapsed: 46.467196ms Aug 10 00:49:56.589: INFO: Pod "pod-configmaps-d5f267c8-8336-4653-807d-226bc0eb0d8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050779216s Aug 10 00:49:58.594: INFO: Pod "pod-configmaps-d5f267c8-8336-4653-807d-226bc0eb0d8d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.055286565s STEP: Saw pod success Aug 10 00:49:58.594: INFO: Pod "pod-configmaps-d5f267c8-8336-4653-807d-226bc0eb0d8d" satisfied condition "Succeeded or Failed" Aug 10 00:49:58.597: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-d5f267c8-8336-4653-807d-226bc0eb0d8d container configmap-volume-test: STEP: delete the pod Aug 10 00:49:58.632: INFO: Waiting for pod pod-configmaps-d5f267c8-8336-4653-807d-226bc0eb0d8d to disappear Aug 10 00:49:58.677: INFO: Pod pod-configmaps-d5f267c8-8336-4653-807d-226bc0eb0d8d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:49:58.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4448" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":259,"skipped":4341,"failed":0} S ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:49:58.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token STEP: reading a file in the container Aug 10 00:50:03.317: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9234 
pod-service-account-b73f12e0-bd57-494d-834a-b7224d6ad324 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Aug 10 00:50:06.583: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9234 pod-service-account-b73f12e0-bd57-494d-834a-b7224d6ad324 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Aug 10 00:50:06.824: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9234 pod-service-account-b73f12e0-bd57-494d-834a-b7224d6ad324 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:50:07.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9234" for this suite. • [SLOW TEST:8.348 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":303,"completed":260,"skipped":4342,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:50:07.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 10 00:50:07.079: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Aug 10 00:50:10.077: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4969 create -f -' Aug 10 00:50:13.864: INFO: stderr: "" Aug 10 00:50:13.864: INFO: stdout: "e2e-test-crd-publish-openapi-9675-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Aug 10 00:50:13.865: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4969 delete e2e-test-crd-publish-openapi-9675-crds test-cr' Aug 10 00:50:13.977: INFO: stderr: "" Aug 10 00:50:13.977: INFO: stdout: "e2e-test-crd-publish-openapi-9675-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Aug 10 00:50:13.977: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4969 apply -f -' Aug 10 00:50:14.308: INFO: stderr: "" Aug 10 00:50:14.308: INFO: stdout: "e2e-test-crd-publish-openapi-9675-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Aug 10 00:50:14.308: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4969 delete e2e-test-crd-publish-openapi-9675-crds test-cr' Aug 10 00:50:14.692: INFO: stderr: "" Aug 10 00:50:14.692: INFO: stdout: "e2e-test-crd-publish-openapi-9675-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Aug 10 
00:50:14.693: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9675-crds' Aug 10 00:50:15.117: INFO: stderr: "" Aug 10 00:50:15.117: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9675-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:50:17.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4969" for this suite. • [SLOW TEST:10.109 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":303,"completed":261,"skipped":4367,"failed":0} SS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:50:17.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Aug 10 00:50:17.334: INFO: Pod name pod-release: Found 0 pods out of 1 Aug 10 00:50:22.346: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:50:22.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-973" for this suite. • [SLOW TEST:5.416 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":303,"completed":262,"skipped":4369,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:50:22.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 10 00:50:23.441: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 10 00:50:25.566: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617423, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617423, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617423, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617423, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 10 00:50:28.618: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 10 00:50:28.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3736-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:50:29.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-608" for this suite. STEP: Destroying namespace "webhook-608-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.538 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":303,"completed":263,"skipped":4369,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:50:30.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Aug 10 00:50:31.012: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Aug 10 00:50:33.021: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617431, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617431, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617431, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617430, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 10 00:50:36.061: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 10 00:50:36.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:50:37.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-5232" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.278 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":303,"completed":264,"skipped":4381,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:50:37.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-f358fe14-5882-4984-84ca-bdef2841dd50 STEP: Creating a pod to test consume configMaps Aug 10 00:50:37.499: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-19792af8-81f6-40dd-bc8b-54d873cc917e" in namespace "projected-8547" to be "Succeeded or Failed" Aug 10 00:50:37.518: INFO: Pod "pod-projected-configmaps-19792af8-81f6-40dd-bc8b-54d873cc917e": Phase="Pending", Reason="", readiness=false. Elapsed: 18.207402ms Aug 10 00:50:39.521: INFO: Pod "pod-projected-configmaps-19792af8-81f6-40dd-bc8b-54d873cc917e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021956704s Aug 10 00:50:41.535: INFO: Pod "pod-projected-configmaps-19792af8-81f6-40dd-bc8b-54d873cc917e": Phase="Running", Reason="", readiness=true. Elapsed: 4.035366833s Aug 10 00:50:43.542: INFO: Pod "pod-projected-configmaps-19792af8-81f6-40dd-bc8b-54d873cc917e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.042202705s STEP: Saw pod success Aug 10 00:50:43.542: INFO: Pod "pod-projected-configmaps-19792af8-81f6-40dd-bc8b-54d873cc917e" satisfied condition "Succeeded or Failed" Aug 10 00:50:43.544: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-19792af8-81f6-40dd-bc8b-54d873cc917e container projected-configmap-volume-test: STEP: delete the pod Aug 10 00:50:43.583: INFO: Waiting for pod pod-projected-configmaps-19792af8-81f6-40dd-bc8b-54d873cc917e to disappear Aug 10 00:50:43.599: INFO: Pod pod-projected-configmaps-19792af8-81f6-40dd-bc8b-54d873cc917e no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:50:43.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8547" for this suite. 
• [SLOW TEST:6.255 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":303,"completed":265,"skipped":4404,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:50:43.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1581 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Aug 10 00:50:43.705: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-9478' Aug 10 00:50:43.831: INFO: stderr: "" Aug 10 00:50:43.831: 
INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Aug 10 00:50:48.882: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-9478 -o json' Aug 10 00:50:48.985: INFO: stderr: "" Aug 10 00:50:48.985: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-08-10T00:50:43Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-08-10T00:50:43Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n 
\"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.2.114\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-08-10T00:50:47Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-9478\",\n \"resourceVersion\": \"5796187\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-9478/pods/e2e-test-httpd-pod\",\n \"uid\": \"7287a002-4267-4a7a-9714-1120bbde3efb\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-dns77\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-dns77\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-dns77\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-10T00:50:43Z\",\n \"status\": \"True\",\n \"type\": 
\"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-10T00:50:47Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-10T00:50:47Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-10T00:50:43Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://36caa0b3bb85d66378b462bfa4bcbfb037f284fd72942c25f7ff11263892b10c\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-08-10T00:50:46Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.12\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.114\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.114\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-08-10T00:50:43Z\"\n }\n}\n" STEP: replace the image in the pod Aug 10 00:50:48.986: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-9478' Aug 10 00:50:49.365: INFO: stderr: "" Aug 10 00:50:49.366: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1586 Aug 10 00:50:49.368: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-9478' Aug 10 00:50:52.653: INFO: stderr: "" Aug 10 00:50:52.653: INFO: stdout: "pod \"e2e-test-httpd-pod\" 
deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:50:52.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9478" for this suite. • [SLOW TEST:9.125 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1577 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":303,"completed":266,"skipped":4416,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:50:52.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod 
STEP: Wait for the deployment to be ready Aug 10 00:50:53.608: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Aug 10 00:50:56.008: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617453, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617453, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617453, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617453, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 10 00:50:58.069: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617453, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617453, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617453, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617453, loc:(*time.Location)(0x7e34b60)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 10 00:51:01.046: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 10 00:51:01.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:51:02.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-1771" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:9.613 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":303,"completed":267,"skipped":4436,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:51:02.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Aug 10 00:51:07.109: INFO: Successfully updated pod "annotationupdate0bbed91c-2588-4e8d-80a0-376dc9f822c5" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:51:11.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1920" for this suite. 
• [SLOW TEST:8.892 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":303,"completed":268,"skipped":4441,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:51:11.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 10 00:51:11.349: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ebf6c5cc-cd7e-4160-b497-cd8538925947" in namespace "projected-3863" to be "Succeeded or Failed" Aug 10 00:51:11.354: INFO: Pod "downwardapi-volume-ebf6c5cc-cd7e-4160-b497-cd8538925947": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.689932ms Aug 10 00:51:13.358: INFO: Pod "downwardapi-volume-ebf6c5cc-cd7e-4160-b497-cd8538925947": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0083966s Aug 10 00:51:15.362: INFO: Pod "downwardapi-volume-ebf6c5cc-cd7e-4160-b497-cd8538925947": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012669686s STEP: Saw pod success Aug 10 00:51:15.362: INFO: Pod "downwardapi-volume-ebf6c5cc-cd7e-4160-b497-cd8538925947" satisfied condition "Succeeded or Failed" Aug 10 00:51:15.365: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-ebf6c5cc-cd7e-4160-b497-cd8538925947 container client-container: STEP: delete the pod Aug 10 00:51:15.405: INFO: Waiting for pod downwardapi-volume-ebf6c5cc-cd7e-4160-b497-cd8538925947 to disappear Aug 10 00:51:15.422: INFO: Pod downwardapi-volume-ebf6c5cc-cd7e-4160-b497-cd8538925947 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:51:15.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3863" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":269,"skipped":4471,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:51:15.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 10 00:51:15.557: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:51:16.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3022" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":303,"completed":270,"skipped":4473,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:51:16.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Aug 10 00:51:16.268: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9a37f5d5-41dd-45a2-8a3c-7cac540a25f7" in namespace "projected-8400" to be "Succeeded or Failed" Aug 10 00:51:16.290: INFO: Pod "downwardapi-volume-9a37f5d5-41dd-45a2-8a3c-7cac540a25f7": Phase="Pending", Reason="", readiness=false. Elapsed: 22.480717ms Aug 10 00:51:18.293: INFO: Pod "downwardapi-volume-9a37f5d5-41dd-45a2-8a3c-7cac540a25f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025535786s Aug 10 00:51:20.314: INFO: Pod "downwardapi-volume-9a37f5d5-41dd-45a2-8a3c-7cac540a25f7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.04625255s STEP: Saw pod success Aug 10 00:51:20.314: INFO: Pod "downwardapi-volume-9a37f5d5-41dd-45a2-8a3c-7cac540a25f7" satisfied condition "Succeeded or Failed" Aug 10 00:51:20.317: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-9a37f5d5-41dd-45a2-8a3c-7cac540a25f7 container client-container: STEP: delete the pod Aug 10 00:51:20.347: INFO: Waiting for pod downwardapi-volume-9a37f5d5-41dd-45a2-8a3c-7cac540a25f7 to disappear Aug 10 00:51:20.355: INFO: Pod downwardapi-volume-9a37f5d5-41dd-45a2-8a3c-7cac540a25f7 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:51:20.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8400" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":303,"completed":271,"skipped":4488,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:51:20.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-3b276c36-c33a-465e-8171-21f5f28abb7c in namespace container-probe-9460 Aug 10 00:51:24.459: INFO: Started pod liveness-3b276c36-c33a-465e-8171-21f5f28abb7c in namespace container-probe-9460 STEP: checking the pod's current state and verifying that restartCount is present Aug 10 00:51:24.463: INFO: Initial restart count of pod liveness-3b276c36-c33a-465e-8171-21f5f28abb7c is 0 Aug 10 00:51:42.504: INFO: Restart count of pod container-probe-9460/liveness-3b276c36-c33a-465e-8171-21f5f28abb7c is now 1 (18.041339418s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:51:42.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9460" for this suite. • [SLOW TEST:22.184 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":303,"completed":272,"skipped":4515,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:51:42.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace 
api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Aug 10 00:51:47.349: INFO: Successfully updated pod "adopt-release-6kph6" STEP: Checking that the Job readopts the Pod Aug 10 00:51:47.349: INFO: Waiting up to 15m0s for pod "adopt-release-6kph6" in namespace "job-5177" to be "adopted" Aug 10 00:51:47.474: INFO: Pod "adopt-release-6kph6": Phase="Running", Reason="", readiness=true. Elapsed: 125.082265ms Aug 10 00:51:49.478: INFO: Pod "adopt-release-6kph6": Phase="Running", Reason="", readiness=true. Elapsed: 2.128921662s Aug 10 00:51:49.478: INFO: Pod "adopt-release-6kph6" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Aug 10 00:51:49.993: INFO: Successfully updated pod "adopt-release-6kph6" STEP: Checking that the Job releases the Pod Aug 10 00:51:49.993: INFO: Waiting up to 15m0s for pod "adopt-release-6kph6" in namespace "job-5177" to be "released" Aug 10 00:51:50.015: INFO: Pod "adopt-release-6kph6": Phase="Running", Reason="", readiness=true. Elapsed: 21.706688ms Aug 10 00:51:52.022: INFO: Pod "adopt-release-6kph6": Phase="Running", Reason="", readiness=true. Elapsed: 2.028902183s Aug 10 00:51:52.022: INFO: Pod "adopt-release-6kph6" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:51:52.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5177" for this suite. 
• [SLOW TEST:9.500 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":303,"completed":273,"skipped":4530,"failed":0} SSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:51:52.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service endpoint-test2 in namespace services-8787 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8787 to expose endpoints map[] Aug 10 00:51:52.624: INFO: successfully validated that service endpoint-test2 in namespace services-8787 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-8787 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8787 to expose endpoints map[pod1:[80]] Aug 10 00:51:56.668: INFO: successfully validated that service endpoint-test2 in namespace services-8787 exposes endpoints map[pod1:[80]] STEP: Creating pod pod2 in namespace 
services-8787 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8787 to expose endpoints map[pod1:[80] pod2:[80]] Aug 10 00:52:00.724: INFO: successfully validated that service endpoint-test2 in namespace services-8787 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-8787 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8787 to expose endpoints map[pod2:[80]] Aug 10 00:52:00.799: INFO: successfully validated that service endpoint-test2 in namespace services-8787 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-8787 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8787 to expose endpoints map[] Aug 10 00:52:01.883: INFO: successfully validated that service endpoint-test2 in namespace services-8787 exposes endpoints map[] [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:52:01.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8787" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:9.870 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":303,"completed":274,"skipped":4536,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:52:01.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 10 00:52:03.154: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 10 00:52:05.274: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617523, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617523, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617523, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617523, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 10 00:52:08.322: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:52:08.480: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "webhook-6198" for this suite. STEP: Destroying namespace "webhook-6198-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.656 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":303,"completed":275,"skipped":4564,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:52:08.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-1987 STEP: creating service 
affinity-clusterip in namespace services-1987 STEP: creating replication controller affinity-clusterip in namespace services-1987 I0810 00:52:08.719519 8 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-1987, replica count: 3 I0810 00:52:11.769867 8 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0810 00:52:14.770093 8 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 10 00:52:14.776: INFO: Creating new exec pod Aug 10 00:52:19.827: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-1987 execpod-affinityt8zql -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' Aug 10 00:52:20.067: INFO: stderr: "I0810 00:52:19.957668 3411 log.go:181] (0xc000cbcd10) (0xc000d24500) Create stream\nI0810 00:52:19.957733 3411 log.go:181] (0xc000cbcd10) (0xc000d24500) Stream added, broadcasting: 1\nI0810 00:52:19.962187 3411 log.go:181] (0xc000cbcd10) Reply frame received for 1\nI0810 00:52:19.962226 3411 log.go:181] (0xc000cbcd10) (0xc000300780) Create stream\nI0810 00:52:19.962252 3411 log.go:181] (0xc000cbcd10) (0xc000300780) Stream added, broadcasting: 3\nI0810 00:52:19.963063 3411 log.go:181] (0xc000cbcd10) Reply frame received for 3\nI0810 00:52:19.963096 3411 log.go:181] (0xc000cbcd10) (0xc000891040) Create stream\nI0810 00:52:19.963106 3411 log.go:181] (0xc000cbcd10) (0xc000891040) Stream added, broadcasting: 5\nI0810 00:52:19.963891 3411 log.go:181] (0xc000cbcd10) Reply frame received for 5\nI0810 00:52:20.059360 3411 log.go:181] (0xc000cbcd10) Data frame received for 5\nI0810 00:52:20.059381 3411 log.go:181] (0xc000891040) (5) Data frame handling\nI0810 00:52:20.059392 3411 log.go:181] (0xc000891040) (5) Data frame sent\n+ nc -zv -t -w 2 
affinity-clusterip 80\nI0810 00:52:20.059497 3411 log.go:181] (0xc000cbcd10) Data frame received for 5\nI0810 00:52:20.059508 3411 log.go:181] (0xc000891040) (5) Data frame handling\nI0810 00:52:20.059514 3411 log.go:181] (0xc000891040) (5) Data frame sent\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI0810 00:52:20.059693 3411 log.go:181] (0xc000cbcd10) Data frame received for 3\nI0810 00:52:20.059708 3411 log.go:181] (0xc000300780) (3) Data frame handling\nI0810 00:52:20.060306 3411 log.go:181] (0xc000cbcd10) Data frame received for 5\nI0810 00:52:20.060319 3411 log.go:181] (0xc000891040) (5) Data frame handling\nI0810 00:52:20.061740 3411 log.go:181] (0xc000cbcd10) Data frame received for 1\nI0810 00:52:20.061764 3411 log.go:181] (0xc000d24500) (1) Data frame handling\nI0810 00:52:20.061778 3411 log.go:181] (0xc000d24500) (1) Data frame sent\nI0810 00:52:20.061795 3411 log.go:181] (0xc000cbcd10) (0xc000d24500) Stream removed, broadcasting: 1\nI0810 00:52:20.061820 3411 log.go:181] (0xc000cbcd10) Go away received\nI0810 00:52:20.062131 3411 log.go:181] (0xc000cbcd10) (0xc000d24500) Stream removed, broadcasting: 1\nI0810 00:52:20.062146 3411 log.go:181] (0xc000cbcd10) (0xc000300780) Stream removed, broadcasting: 3\nI0810 00:52:20.062152 3411 log.go:181] (0xc000cbcd10) (0xc000891040) Stream removed, broadcasting: 5\n" Aug 10 00:52:20.067: INFO: stdout: "" Aug 10 00:52:20.068: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec --namespace=services-1987 execpod-affinityt8zql -- /bin/sh -x -c nc -zv -t -w 2 10.98.4.198 80' Aug 10 00:52:20.258: INFO: stderr: "I0810 00:52:20.189414 3429 log.go:181] (0xc000f013f0) (0xc0004f2500) Create stream\nI0810 00:52:20.189458 3429 log.go:181] (0xc000f013f0) (0xc0004f2500) Stream added, broadcasting: 1\nI0810 00:52:20.193081 3429 log.go:181] (0xc000f013f0) Reply frame received for 1\nI0810 00:52:20.193141 3429 log.go:181] (0xc000f013f0) (0xc000abf220) 
Create stream\nI0810 00:52:20.193169 3429 log.go:181] (0xc000f013f0) (0xc000abf220) Stream added, broadcasting: 3\nI0810 00:52:20.193936 3429 log.go:181] (0xc000f013f0) Reply frame received for 3\nI0810 00:52:20.193961 3429 log.go:181] (0xc000f013f0) (0xc00090c320) Create stream\nI0810 00:52:20.193969 3429 log.go:181] (0xc000f013f0) (0xc00090c320) Stream added, broadcasting: 5\nI0810 00:52:20.194784 3429 log.go:181] (0xc000f013f0) Reply frame received for 5\nI0810 00:52:20.252582 3429 log.go:181] (0xc000f013f0) Data frame received for 3\nI0810 00:52:20.252619 3429 log.go:181] (0xc000abf220) (3) Data frame handling\nI0810 00:52:20.252643 3429 log.go:181] (0xc000f013f0) Data frame received for 5\nI0810 00:52:20.252652 3429 log.go:181] (0xc00090c320) (5) Data frame handling\nI0810 00:52:20.252662 3429 log.go:181] (0xc00090c320) (5) Data frame sent\nI0810 00:52:20.252669 3429 log.go:181] (0xc000f013f0) Data frame received for 5\nI0810 00:52:20.252675 3429 log.go:181] (0xc00090c320) (5) Data frame handling\n+ nc -zv -t -w 2 10.98.4.198 80\nConnection to 10.98.4.198 80 port [tcp/http] succeeded!\nI0810 00:52:20.253882 3429 log.go:181] (0xc000f013f0) Data frame received for 1\nI0810 00:52:20.253901 3429 log.go:181] (0xc0004f2500) (1) Data frame handling\nI0810 00:52:20.253912 3429 log.go:181] (0xc0004f2500) (1) Data frame sent\nI0810 00:52:20.253920 3429 log.go:181] (0xc000f013f0) (0xc0004f2500) Stream removed, broadcasting: 1\nI0810 00:52:20.253929 3429 log.go:181] (0xc000f013f0) Go away received\nI0810 00:52:20.254235 3429 log.go:181] (0xc000f013f0) (0xc0004f2500) Stream removed, broadcasting: 1\nI0810 00:52:20.254249 3429 log.go:181] (0xc000f013f0) (0xc000abf220) Stream removed, broadcasting: 3\nI0810 00:52:20.254254 3429 log.go:181] (0xc000f013f0) (0xc00090c320) Stream removed, broadcasting: 5\n" Aug 10 00:52:20.258: INFO: stdout: "" Aug 10 00:52:20.258: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config exec 
--namespace=services-1987 execpod-affinityt8zql -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.98.4.198:80/ ; done' Aug 10 00:52:20.582: INFO: stderr: "I0810 00:52:20.387594 3447 log.go:181] (0xc0007bcfd0) (0xc00028a780) Create stream\nI0810 00:52:20.387637 3447 log.go:181] (0xc0007bcfd0) (0xc00028a780) Stream added, broadcasting: 1\nI0810 00:52:20.391797 3447 log.go:181] (0xc0007bcfd0) Reply frame received for 1\nI0810 00:52:20.391851 3447 log.go:181] (0xc0007bcfd0) (0xc000ea2320) Create stream\nI0810 00:52:20.391873 3447 log.go:181] (0xc0007bcfd0) (0xc000ea2320) Stream added, broadcasting: 3\nI0810 00:52:20.394160 3447 log.go:181] (0xc0007bcfd0) Reply frame received for 3\nI0810 00:52:20.394211 3447 log.go:181] (0xc0007bcfd0) (0xc00047a5a0) Create stream\nI0810 00:52:20.394226 3447 log.go:181] (0xc0007bcfd0) (0xc00047a5a0) Stream added, broadcasting: 5\nI0810 00:52:20.395130 3447 log.go:181] (0xc0007bcfd0) Reply frame received for 5\nI0810 00:52:20.474379 3447 log.go:181] (0xc0007bcfd0) Data frame received for 5\nI0810 00:52:20.474414 3447 log.go:181] (0xc00047a5a0) (5) Data frame handling\nI0810 00:52:20.474429 3447 log.go:181] (0xc00047a5a0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.4.198:80/\nI0810 00:52:20.474451 3447 log.go:181] (0xc0007bcfd0) Data frame received for 3\nI0810 00:52:20.474469 3447 log.go:181] (0xc000ea2320) (3) Data frame handling\nI0810 00:52:20.474497 3447 log.go:181] (0xc000ea2320) (3) Data frame sent\nI0810 00:52:20.479810 3447 log.go:181] (0xc0007bcfd0) Data frame received for 3\nI0810 00:52:20.479836 3447 log.go:181] (0xc000ea2320) (3) Data frame handling\nI0810 00:52:20.479855 3447 log.go:181] (0xc000ea2320) (3) Data frame sent\nI0810 00:52:20.480482 3447 log.go:181] (0xc0007bcfd0) Data frame received for 5\nI0810 00:52:20.480515 3447 log.go:181] (0xc00047a5a0) (5) Data frame handling\nI0810 00:52:20.480551 3447 log.go:181] (0xc00047a5a0) (5) Data 
frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.4.198:80/\nI0810 00:52:20.480642 3447 log.go:181] (0xc0007bcfd0) Data frame received for 3\nI0810 00:52:20.480657 3447 log.go:181] (0xc000ea2320) (3) Data frame handling\nI0810 00:52:20.480665 3447 log.go:181] (0xc000ea2320) (3) Data frame sent\n[... identical 'Data frame received' / 'Data frame handling' / 'Data frame sent' triplets for streams 3 and 5, interleaved with the remaining '+ echo' / '+ curl -q -s --connect-timeout 2 http://10.98.4.198:80/' invocations, elided ...]\nI0810 00:52:20.576560 3447 log.go:181] (0xc0007bcfd0) Data frame received for 1\nI0810 00:52:20.576582 3447 log.go:181] (0xc00028a780) (1) Data frame handling\nI0810 00:52:20.576604 3447 log.go:181] (0xc00028a780) (1) Data frame sent\nI0810 00:52:20.576620 3447 log.go:181] (0xc0007bcfd0) (0xc00028a780) Stream removed, broadcasting: 1\nI0810 00:52:20.576697 3447 log.go:181] (0xc0007bcfd0) Go away received\nI0810 00:52:20.577114 3447 log.go:181] (0xc0007bcfd0) (0xc00028a780) Stream removed, broadcasting: 1\nI0810 00:52:20.577133 3447 log.go:181] (0xc0007bcfd0) (0xc000ea2320) Stream removed, broadcasting: 3\nI0810 00:52:20.577140 3447 log.go:181] (0xc0007bcfd0) (0xc00047a5a0) Stream removed, broadcasting: 5\n" Aug 10 00:52:20.582: INFO: stdout:
"\naffinity-clusterip-4snh4\naffinity-clusterip-4snh4\naffinity-clusterip-4snh4\naffinity-clusterip-4snh4\naffinity-clusterip-4snh4\naffinity-clusterip-4snh4\naffinity-clusterip-4snh4\naffinity-clusterip-4snh4\naffinity-clusterip-4snh4\naffinity-clusterip-4snh4\naffinity-clusterip-4snh4\naffinity-clusterip-4snh4\naffinity-clusterip-4snh4\naffinity-clusterip-4snh4\naffinity-clusterip-4snh4\naffinity-clusterip-4snh4" Aug 10 00:52:20.582: INFO: Received response from host: affinity-clusterip-4snh4 [... the same 'Received response from host: affinity-clusterip-4snh4' line repeated for each of the 16 responses ...] Aug 10 00:52:20.583: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-1987, will wait for the garbage collector to delete the pods Aug 10 00:52:20.717: INFO: Deleting ReplicationController affinity-clusterip took: 6.847121ms
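A passing affinity run means every request in the stdout above landed on the same backend pod. The e2e framework performs this check in Go; the sketch below restates the predicate in Python under stated assumptions (the function name and the 16-request threshold are ours, chosen to match this run):

```python
def check_affinity(responses, min_requests=16):
    """Return True if at least min_requests non-empty responses were
    collected and all of them name the same backend pod."""
    hosts = [r.strip() for r in responses if r.strip()]
    return len(hosts) >= min_requests and len(set(hosts)) == 1

# Shape of the stdout captured above: a leading newline, then 16 hostnames.
stdout = "\naffinity-clusterip-4snh4" * 16
print(check_affinity(stdout.split("\n")))  # True
```

If even one response had named a different pod, the set would have more than one element and the run would fail the affinity check.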
Aug 10 00:52:21.217: INFO: Terminating ReplicationController affinity-clusterip pods took: 500.24839ms [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:52:33.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1987" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:24.783 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":276,"skipped":4581,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:52:33.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name 
projected-secret-test-12a78ce9-b8bc-4e4e-b669-66f209b40c5b STEP: Creating a pod to test consume secrets Aug 10 00:52:33.487: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f6a8cc56-d031-4d89-897a-31556ba70672" in namespace "projected-8802" to be "Succeeded or Failed" Aug 10 00:52:33.495: INFO: Pod "pod-projected-secrets-f6a8cc56-d031-4d89-897a-31556ba70672": Phase="Pending", Reason="", readiness=false. Elapsed: 7.583894ms Aug 10 00:52:35.537: INFO: Pod "pod-projected-secrets-f6a8cc56-d031-4d89-897a-31556ba70672": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049172882s Aug 10 00:52:37.540: INFO: Pod "pod-projected-secrets-f6a8cc56-d031-4d89-897a-31556ba70672": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052907001s STEP: Saw pod success Aug 10 00:52:37.540: INFO: Pod "pod-projected-secrets-f6a8cc56-d031-4d89-897a-31556ba70672" satisfied condition "Succeeded or Failed" Aug 10 00:52:37.544: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-f6a8cc56-d031-4d89-897a-31556ba70672 container projected-secret-volume-test: STEP: delete the pod Aug 10 00:52:37.579: INFO: Waiting for pod pod-projected-secrets-f6a8cc56-d031-4d89-897a-31556ba70672 to disappear Aug 10 00:52:37.599: INFO: Pod pod-projected-secrets-f6a8cc56-d031-4d89-897a-31556ba70672 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:52:37.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8802" for this suite. 
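The repeated 'Waiting up to 5m0s for pod ... to be "Succeeded or Failed"' lines with growing Elapsed values come from a poll-until-terminal-phase loop. A minimal sketch of that loop, assuming a `get_phase` callable standing in for the framework's API GET on the pod (the helper name and interval are ours):

```python
import time

def wait_for_pod_phase(get_phase, timeout=300, interval=2):
    """Poll get_phase() until the pod reaches a terminal phase
    (Succeeded or Failed) or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval)
    raise TimeoutError("pod did not reach a terminal phase in time")

# Simulate the run above: Pending twice, then Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_pod_phase(lambda: next(phases), interval=0))  # Succeeded
```

Each iteration of the real loop produces one of the `Phase="Pending" ... Elapsed: ...` log lines seen above.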
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":277,"skipped":4610,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:52:37.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 10 00:52:38.160: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 10 00:52:40.171: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617558, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617558, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617558, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617558, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 10 00:52:43.208: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 10 00:52:43.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:52:44.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3110" for this suite. STEP: Destroying namespace "webhook-3110-markers" for this suite. 
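The DeploymentStatus dump above shows why the wait continues: `AvailableReplicas:0` with reason `MinimumReplicasUnavailable`. A rough sketch of the readiness predicate the wait loop keeps re-evaluating, using the field names from the printed status (the helper itself is ours, not the framework's exact check):

```python
def deployment_ready(status):
    """Rough readiness check mirroring what the e2e wait loop looks for:
    all replicas updated, ready, and none unavailable."""
    return (
        status["UpdatedReplicas"] == status["Replicas"]
        and status["ReadyReplicas"] == status["Replicas"]
        and status["UnavailableReplicas"] == 0
    )

# The not-yet-ready status printed above:
status = {"Replicas": 1, "UpdatedReplicas": 1, "ReadyReplicas": 0,
          "AvailableReplicas": 0, "UnavailableReplicas": 1}
print(deployment_ready(status))  # False
```

Once the webhook pod passes its readiness probe, ReadyReplicas and AvailableReplicas reach 1 and the test proceeds to pair the service with the endpoint.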
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.830 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":303,"completed":278,"skipped":4627,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:52:44.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 10 00:52:45.783: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 10 00:52:47.794: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617565, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617565, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617566, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617565, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 10 00:52:50.821: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:53:01.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3516" for this suite. STEP: Destroying namespace "webhook-3516-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.704 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":303,"completed":279,"skipped":4658,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:53:01.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: 
Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 10 00:53:02.106: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 10 00:53:04.116: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617582, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617582, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617582, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617582, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 10 00:53:07.244: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:53:07.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3616" for this suite. STEP: Destroying namespace "webhook-3616-markers" for this suite. 
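The mutating webhook exercised here receives an AdmissionReview for the ConfigMap and answers with a base64-encoded JSONPatch, which the API server applies before persisting the object. The sketch below shows the general response shape; the patched key name is illustrative, not the key the e2e sample webhook actually adds:

```python
import base64
import json

def mutate_configmap(admission_review):
    """Build a mutating AdmissionReview response that adds one data key
    to the incoming ConfigMap via a JSONPatch (key name is illustrative)."""
    patch = [{"op": "add", "path": "/data/mutated-by-webhook", "value": "yes"}]
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": admission_review["request"]["uid"],
            "allowed": True,
            "patchType": "JSONPatch",
            "patch": base64.b64encode(json.dumps(patch).encode()).decode(),
        },
    }

review = {"request": {"uid": "abc-123", "object": {"data": {}}}}
resp = mutate_configmap(review)["response"]
print(resp["allowed"], json.loads(base64.b64decode(resp["patch"]))[0]["path"])
# True /data/mutated-by-webhook
```

The test then reads the ConfigMap back and verifies the webhook's mutation is present, which is what the single 'create a configmap that should be updated by the webhook' STEP above covers.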
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.278 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":303,"completed":280,"skipped":4668,"failed":0} S ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:53:07.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Aug 10 00:53:17.622: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-534 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 10 00:53:17.622: INFO: >>> 
kubeConfig: /root/.kube/config I0810 00:53:17.653474 8 log.go:181] (0xc0072c2a50) (0xc003468f00) Create stream I0810 00:53:17.653499 8 log.go:181] (0xc0072c2a50) (0xc003468f00) Stream added, broadcasting: 1 I0810 00:53:17.662173 8 log.go:181] (0xc0072c2a50) Reply frame received for 1 I0810 00:53:17.662260 8 log.go:181] (0xc0072c2a50) (0xc003468fa0) Create stream I0810 00:53:17.662279 8 log.go:181] (0xc0072c2a50) (0xc003468fa0) Stream added, broadcasting: 3 I0810 00:53:17.663366 8 log.go:181] (0xc0072c2a50) Reply frame received for 3 I0810 00:53:17.663401 8 log.go:181] (0xc0072c2a50) (0xc003468000) Create stream I0810 00:53:17.663416 8 log.go:181] (0xc0072c2a50) (0xc003468000) Stream added, broadcasting: 5 I0810 00:53:17.664066 8 log.go:181] (0xc0072c2a50) Reply frame received for 5 I0810 00:53:17.724232 8 log.go:181] (0xc0072c2a50) Data frame received for 5 I0810 00:53:17.724269 8 log.go:181] (0xc003468000) (5) Data frame handling I0810 00:53:17.724290 8 log.go:181] (0xc0072c2a50) Data frame received for 3 I0810 00:53:17.724303 8 log.go:181] (0xc003468fa0) (3) Data frame handling I0810 00:53:17.724316 8 log.go:181] (0xc003468fa0) (3) Data frame sent I0810 00:53:17.724327 8 log.go:181] (0xc0072c2a50) Data frame received for 3 I0810 00:53:17.724337 8 log.go:181] (0xc003468fa0) (3) Data frame handling I0810 00:53:17.726289 8 log.go:181] (0xc0072c2a50) Data frame received for 1 I0810 00:53:17.726337 8 log.go:181] (0xc003468f00) (1) Data frame handling I0810 00:53:17.726368 8 log.go:181] (0xc003468f00) (1) Data frame sent I0810 00:53:17.726415 8 log.go:181] (0xc0072c2a50) (0xc003468f00) Stream removed, broadcasting: 1 I0810 00:53:17.726471 8 log.go:181] (0xc0072c2a50) Go away received I0810 00:53:17.726544 8 log.go:181] (0xc0072c2a50) (0xc003468f00) Stream removed, broadcasting: 1 I0810 00:53:17.726576 8 log.go:181] (0xc0072c2a50) (0xc003468fa0) Stream removed, broadcasting: 3 I0810 00:53:17.726593 8 log.go:181] (0xc0072c2a50) (0xc003468000) Stream removed, 
broadcasting: 5 Aug 10 00:53:17.726: INFO: Exec stderr: "" Aug 10 00:53:17.726: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-534 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 10 00:53:17.726: INFO: >>> kubeConfig: /root/.kube/config I0810 00:53:17.758803 8 log.go:181] (0xc0072c22c0) (0xc0002d75e0) Create stream I0810 00:53:17.758837 8 log.go:181] (0xc0072c22c0) (0xc0002d75e0) Stream added, broadcasting: 1 I0810 00:53:17.761048 8 log.go:181] (0xc0072c22c0) Reply frame received for 1 I0810 00:53:17.761088 8 log.go:181] (0xc0072c22c0) (0xc00076e1e0) Create stream I0810 00:53:17.761108 8 log.go:181] (0xc0072c22c0) (0xc00076e1e0) Stream added, broadcasting: 3 I0810 00:53:17.762063 8 log.go:181] (0xc0072c22c0) Reply frame received for 3 I0810 00:53:17.762143 8 log.go:181] (0xc0072c22c0) (0xc0002d7f40) Create stream I0810 00:53:17.762201 8 log.go:181] (0xc0072c22c0) (0xc0002d7f40) Stream added, broadcasting: 5 I0810 00:53:17.763496 8 log.go:181] (0xc0072c22c0) Reply frame received for 5 I0810 00:53:17.842239 8 log.go:181] (0xc0072c22c0) Data frame received for 3 I0810 00:53:17.842270 8 log.go:181] (0xc00076e1e0) (3) Data frame handling I0810 00:53:17.842282 8 log.go:181] (0xc00076e1e0) (3) Data frame sent I0810 00:53:17.842291 8 log.go:181] (0xc0072c22c0) Data frame received for 3 I0810 00:53:17.842304 8 log.go:181] (0xc00076e1e0) (3) Data frame handling I0810 00:53:17.842340 8 log.go:181] (0xc0072c22c0) Data frame received for 5 I0810 00:53:17.842360 8 log.go:181] (0xc0002d7f40) (5) Data frame handling I0810 00:53:17.843596 8 log.go:181] (0xc0072c22c0) Data frame received for 1 I0810 00:53:17.843624 8 log.go:181] (0xc0002d75e0) (1) Data frame handling I0810 00:53:17.843651 8 log.go:181] (0xc0002d75e0) (1) Data frame sent I0810 00:53:17.843675 8 log.go:181] (0xc0072c22c0) (0xc0002d75e0) Stream removed, broadcasting: 1 I0810 00:53:17.843690 8 log.go:181] 
(0xc0072c22c0) Go away received I0810 00:53:17.843853 8 log.go:181] (0xc0072c22c0) (0xc0002d75e0) Stream removed, broadcasting: 1 I0810 00:53:17.843881 8 log.go:181] (0xc0072c22c0) (0xc00076e1e0) Stream removed, broadcasting: 3 I0810 00:53:17.843906 8 log.go:181] (0xc0072c22c0) (0xc0002d7f40) Stream removed, broadcasting: 5 Aug 10 00:53:17.843: INFO: Exec stderr: "" Aug 10 00:53:17.843: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-534 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 10 00:53:17.843: INFO: >>> kubeConfig: /root/.kube/config I0810 00:53:17.881593 8 log.go:181] (0xc000042fd0) (0xc003dbc000) Create stream I0810 00:53:17.881633 8 log.go:181] (0xc000042fd0) (0xc003dbc000) Stream added, broadcasting: 1 I0810 00:53:17.883954 8 log.go:181] (0xc000042fd0) Reply frame received for 1 I0810 00:53:17.884001 8 log.go:181] (0xc000042fd0) (0xc00076e320) Create stream I0810 00:53:17.884024 8 log.go:181] (0xc000042fd0) (0xc00076e320) Stream added, broadcasting: 3 I0810 00:53:17.885315 8 log.go:181] (0xc000042fd0) Reply frame received for 3 I0810 00:53:17.885360 8 log.go:181] (0xc000042fd0) (0xc003468280) Create stream I0810 00:53:17.885385 8 log.go:181] (0xc000042fd0) (0xc003468280) Stream added, broadcasting: 5 I0810 00:53:17.886676 8 log.go:181] (0xc000042fd0) Reply frame received for 5 I0810 00:53:17.962818 8 log.go:181] (0xc000042fd0) Data frame received for 3 I0810 00:53:17.962892 8 log.go:181] (0xc00076e320) (3) Data frame handling I0810 00:53:17.962931 8 log.go:181] (0xc00076e320) (3) Data frame sent I0810 00:53:17.963325 8 log.go:181] (0xc000042fd0) Data frame received for 5 I0810 00:53:17.963353 8 log.go:181] (0xc003468280) (5) Data frame handling I0810 00:53:17.964049 8 log.go:181] (0xc000042fd0) Data frame received for 3 I0810 00:53:17.964073 8 log.go:181] (0xc00076e320) (3) Data frame handling I0810 00:53:17.969522 8 log.go:181] (0xc000042fd0) Data frame 
received for 1 I0810 00:53:17.969545 8 log.go:181] (0xc003dbc000) (1) Data frame handling I0810 00:53:17.969562 8 log.go:181] (0xc003dbc000) (1) Data frame sent I0810 00:53:17.969598 8 log.go:181] (0xc000042fd0) (0xc003dbc000) Stream removed, broadcasting: 1 I0810 00:53:17.969625 8 log.go:181] (0xc000042fd0) Go away received I0810 00:53:17.969716 8 log.go:181] (0xc000042fd0) (0xc003dbc000) Stream removed, broadcasting: 1 I0810 00:53:17.969740 8 log.go:181] (0xc000042fd0) (0xc00076e320) Stream removed, broadcasting: 3 I0810 00:53:17.969747 8 log.go:181] (0xc000042fd0) (0xc003468280) Stream removed, broadcasting: 5 Aug 10 00:53:17.969: INFO: Exec stderr: "" Aug 10 00:53:17.969: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-534 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 10 00:53:17.969: INFO: >>> kubeConfig: /root/.kube/config I0810 00:53:17.997519 8 log.go:181] (0xc0072c29a0) (0xc002da2460) Create stream I0810 00:53:17.997548 8 log.go:181] (0xc0072c29a0) (0xc002da2460) Stream added, broadcasting: 1 I0810 00:53:17.999383 8 log.go:181] (0xc0072c29a0) Reply frame received for 1 I0810 00:53:17.999417 8 log.go:181] (0xc0072c29a0) (0xc003468320) Create stream I0810 00:53:17.999430 8 log.go:181] (0xc0072c29a0) (0xc003468320) Stream added, broadcasting: 3 I0810 00:53:18.000335 8 log.go:181] (0xc0072c29a0) Reply frame received for 3 I0810 00:53:18.000367 8 log.go:181] (0xc0072c29a0) (0xc003dbc0a0) Create stream I0810 00:53:18.000379 8 log.go:181] (0xc0072c29a0) (0xc003dbc0a0) Stream added, broadcasting: 5 I0810 00:53:18.001516 8 log.go:181] (0xc0072c29a0) Reply frame received for 5 I0810 00:53:18.074471 8 log.go:181] (0xc0072c29a0) Data frame received for 5 I0810 00:53:18.074498 8 log.go:181] (0xc003dbc0a0) (5) Data frame handling I0810 00:53:18.074517 8 log.go:181] (0xc0072c29a0) Data frame received for 3 I0810 00:53:18.074525 8 log.go:181] (0xc003468320) (3) 
Data frame handling I0810 00:53:18.074534 8 log.go:181] (0xc003468320) (3) Data frame sent I0810 00:53:18.074540 8 log.go:181] (0xc0072c29a0) Data frame received for 3 I0810 00:53:18.074546 8 log.go:181] (0xc003468320) (3) Data frame handling I0810 00:53:18.075387 8 log.go:181] (0xc0072c29a0) Data frame received for 1 I0810 00:53:18.075406 8 log.go:181] (0xc002da2460) (1) Data frame handling I0810 00:53:18.075413 8 log.go:181] (0xc002da2460) (1) Data frame sent I0810 00:53:18.075510 8 log.go:181] (0xc0072c29a0) (0xc002da2460) Stream removed, broadcasting: 1 I0810 00:53:18.075546 8 log.go:181] (0xc0072c29a0) Go away received I0810 00:53:18.075622 8 log.go:181] (0xc0072c29a0) (0xc002da2460) Stream removed, broadcasting: 1 I0810 00:53:18.075635 8 log.go:181] (0xc0072c29a0) (0xc003468320) Stream removed, broadcasting: 3 I0810 00:53:18.075641 8 log.go:181] (0xc0072c29a0) (0xc003dbc0a0) Stream removed, broadcasting: 5 Aug 10 00:53:18.075: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Aug 10 00:53:18.075: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-534 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 10 00:53:18.075: INFO: >>> kubeConfig: /root/.kube/config I0810 00:53:18.097070 8 log.go:181] (0xc0012b48f0) (0xc0016925a0) Create stream I0810 00:53:18.097100 8 log.go:181] (0xc0012b48f0) (0xc0016925a0) Stream added, broadcasting: 1 I0810 00:53:18.098692 8 log.go:181] (0xc0012b48f0) Reply frame received for 1 I0810 00:53:18.098723 8 log.go:181] (0xc0012b48f0) (0xc002da2500) Create stream I0810 00:53:18.098731 8 log.go:181] (0xc0012b48f0) (0xc002da2500) Stream added, broadcasting: 3 I0810 00:53:18.099610 8 log.go:181] (0xc0012b48f0) Reply frame received for 3 I0810 00:53:18.099657 8 log.go:181] (0xc0012b48f0) (0xc0034683c0) Create stream I0810 00:53:18.099674 8 log.go:181] (0xc0012b48f0) 
(0xc0034683c0) Stream added, broadcasting: 5 I0810 00:53:18.100469 8 log.go:181] (0xc0012b48f0) Reply frame received for 5 I0810 00:53:18.161000 8 log.go:181] (0xc0012b48f0) Data frame received for 3 I0810 00:53:18.161053 8 log.go:181] (0xc002da2500) (3) Data frame handling I0810 00:53:18.161069 8 log.go:181] (0xc002da2500) (3) Data frame sent I0810 00:53:18.161082 8 log.go:181] (0xc0012b48f0) Data frame received for 3 I0810 00:53:18.161091 8 log.go:181] (0xc002da2500) (3) Data frame handling I0810 00:53:18.161125 8 log.go:181] (0xc0012b48f0) Data frame received for 5 I0810 00:53:18.161160 8 log.go:181] (0xc0034683c0) (5) Data frame handling I0810 00:53:18.163112 8 log.go:181] (0xc0012b48f0) Data frame received for 1 I0810 00:53:18.163131 8 log.go:181] (0xc0016925a0) (1) Data frame handling I0810 00:53:18.163149 8 log.go:181] (0xc0016925a0) (1) Data frame sent I0810 00:53:18.163172 8 log.go:181] (0xc0012b48f0) (0xc0016925a0) Stream removed, broadcasting: 1 I0810 00:53:18.163207 8 log.go:181] (0xc0012b48f0) Go away received I0810 00:53:18.163274 8 log.go:181] (0xc0012b48f0) (0xc0016925a0) Stream removed, broadcasting: 1 I0810 00:53:18.163295 8 log.go:181] (0xc0012b48f0) (0xc002da2500) Stream removed, broadcasting: 3 I0810 00:53:18.163307 8 log.go:181] (0xc0012b48f0) (0xc0034683c0) Stream removed, broadcasting: 5 Aug 10 00:53:18.163: INFO: Exec stderr: "" Aug 10 00:53:18.163: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-534 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 10 00:53:18.163: INFO: >>> kubeConfig: /root/.kube/config I0810 00:53:18.189973 8 log.go:181] (0xc000043970) (0xc003dbc320) Create stream I0810 00:53:18.189998 8 log.go:181] (0xc000043970) (0xc003dbc320) Stream added, broadcasting: 1 I0810 00:53:18.191791 8 log.go:181] (0xc000043970) Reply frame received for 1 I0810 00:53:18.191834 8 log.go:181] (0xc000043970) (0xc002da2780) Create stream 
I0810 00:53:18.191857 8 log.go:181] (0xc000043970) (0xc002da2780) Stream added, broadcasting: 3 I0810 00:53:18.192909 8 log.go:181] (0xc000043970) Reply frame received for 3 I0810 00:53:18.192968 8 log.go:181] (0xc000043970) (0xc001692640) Create stream I0810 00:53:18.192986 8 log.go:181] (0xc000043970) (0xc001692640) Stream added, broadcasting: 5 I0810 00:53:18.193989 8 log.go:181] (0xc000043970) Reply frame received for 5 I0810 00:53:18.260626 8 log.go:181] (0xc000043970) Data frame received for 3 I0810 00:53:18.260673 8 log.go:181] (0xc002da2780) (3) Data frame handling I0810 00:53:18.260711 8 log.go:181] (0xc002da2780) (3) Data frame sent I0810 00:53:18.260969 8 log.go:181] (0xc000043970) Data frame received for 5 I0810 00:53:18.261004 8 log.go:181] (0xc000043970) Data frame received for 3 I0810 00:53:18.261045 8 log.go:181] (0xc002da2780) (3) Data frame handling I0810 00:53:18.261080 8 log.go:181] (0xc001692640) (5) Data frame handling I0810 00:53:18.262530 8 log.go:181] (0xc000043970) Data frame received for 1 I0810 00:53:18.262556 8 log.go:181] (0xc003dbc320) (1) Data frame handling I0810 00:53:18.262580 8 log.go:181] (0xc003dbc320) (1) Data frame sent I0810 00:53:18.262605 8 log.go:181] (0xc000043970) (0xc003dbc320) Stream removed, broadcasting: 1 I0810 00:53:18.262643 8 log.go:181] (0xc000043970) Go away received I0810 00:53:18.262707 8 log.go:181] (0xc000043970) (0xc003dbc320) Stream removed, broadcasting: 1 I0810 00:53:18.262728 8 log.go:181] (0xc000043970) (0xc002da2780) Stream removed, broadcasting: 3 I0810 00:53:18.262742 8 log.go:181] (0xc000043970) (0xc001692640) Stream removed, broadcasting: 5 Aug 10 00:53:18.262: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Aug 10 00:53:18.262: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-534 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false} Aug 10 00:53:18.262: INFO: >>> kubeConfig: /root/.kube/config I0810 00:53:18.289122 8 log.go:181] (0xc0012b4fd0) (0xc0016928c0) Create stream I0810 00:53:18.289143 8 log.go:181] (0xc0012b4fd0) (0xc0016928c0) Stream added, broadcasting: 1 I0810 00:53:18.291188 8 log.go:181] (0xc0012b4fd0) Reply frame received for 1 I0810 00:53:18.291234 8 log.go:181] (0xc0012b4fd0) (0xc001692960) Create stream I0810 00:53:18.291249 8 log.go:181] (0xc0012b4fd0) (0xc001692960) Stream added, broadcasting: 3 I0810 00:53:18.292292 8 log.go:181] (0xc0012b4fd0) Reply frame received for 3 I0810 00:53:18.292343 8 log.go:181] (0xc0012b4fd0) (0xc002da2820) Create stream I0810 00:53:18.292354 8 log.go:181] (0xc0012b4fd0) (0xc002da2820) Stream added, broadcasting: 5 I0810 00:53:18.293483 8 log.go:181] (0xc0012b4fd0) Reply frame received for 5 I0810 00:53:18.374257 8 log.go:181] (0xc0012b4fd0) Data frame received for 3 I0810 00:53:18.374278 8 log.go:181] (0xc001692960) (3) Data frame handling I0810 00:53:18.374286 8 log.go:181] (0xc001692960) (3) Data frame sent I0810 00:53:18.374291 8 log.go:181] (0xc0012b4fd0) Data frame received for 3 I0810 00:53:18.374295 8 log.go:181] (0xc001692960) (3) Data frame handling I0810 00:53:18.374384 8 log.go:181] (0xc0012b4fd0) Data frame received for 5 I0810 00:53:18.374408 8 log.go:181] (0xc002da2820) (5) Data frame handling I0810 00:53:18.376302 8 log.go:181] (0xc0012b4fd0) Data frame received for 1 I0810 00:53:18.376336 8 log.go:181] (0xc0016928c0) (1) Data frame handling I0810 00:53:18.376350 8 log.go:181] (0xc0016928c0) (1) Data frame sent I0810 00:53:18.376363 8 log.go:181] (0xc0012b4fd0) (0xc0016928c0) Stream removed, broadcasting: 1 I0810 00:53:18.376387 8 log.go:181] (0xc0012b4fd0) Go away received I0810 00:53:18.376525 8 log.go:181] (0xc0012b4fd0) (0xc0016928c0) Stream removed, broadcasting: 1 I0810 00:53:18.376543 8 log.go:181] (0xc0012b4fd0) (0xc001692960) Stream removed, broadcasting: 3 I0810 00:53:18.376553 8 log.go:181] 
(0xc0012b4fd0) (0xc002da2820) Stream removed, broadcasting: 5 Aug 10 00:53:18.376: INFO: Exec stderr: "" Aug 10 00:53:18.376: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-534 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 10 00:53:18.376: INFO: >>> kubeConfig: /root/.kube/config I0810 00:53:18.414653 8 log.go:181] (0xc0017980b0) (0xc003dbc5a0) Create stream I0810 00:53:18.414681 8 log.go:181] (0xc0017980b0) (0xc003dbc5a0) Stream added, broadcasting: 1 I0810 00:53:18.417113 8 log.go:181] (0xc0017980b0) Reply frame received for 1 I0810 00:53:18.417136 8 log.go:181] (0xc0017980b0) (0xc002da28c0) Create stream I0810 00:53:18.417142 8 log.go:181] (0xc0017980b0) (0xc002da28c0) Stream added, broadcasting: 3 I0810 00:53:18.418062 8 log.go:181] (0xc0017980b0) Reply frame received for 3 I0810 00:53:18.418116 8 log.go:181] (0xc0017980b0) (0xc0010661e0) Create stream I0810 00:53:18.418134 8 log.go:181] (0xc0017980b0) (0xc0010661e0) Stream added, broadcasting: 5 I0810 00:53:18.418946 8 log.go:181] (0xc0017980b0) Reply frame received for 5 I0810 00:53:18.473697 8 log.go:181] (0xc0017980b0) Data frame received for 3 I0810 00:53:18.473732 8 log.go:181] (0xc002da28c0) (3) Data frame handling I0810 00:53:18.473741 8 log.go:181] (0xc002da28c0) (3) Data frame sent I0810 00:53:18.473753 8 log.go:181] (0xc0017980b0) Data frame received for 3 I0810 00:53:18.473772 8 log.go:181] (0xc002da28c0) (3) Data frame handling I0810 00:53:18.473843 8 log.go:181] (0xc0017980b0) Data frame received for 5 I0810 00:53:18.473878 8 log.go:181] (0xc0010661e0) (5) Data frame handling I0810 00:53:18.475166 8 log.go:181] (0xc0017980b0) Data frame received for 1 I0810 00:53:18.475187 8 log.go:181] (0xc003dbc5a0) (1) Data frame handling I0810 00:53:18.475204 8 log.go:181] (0xc003dbc5a0) (1) Data frame sent I0810 00:53:18.475229 8 log.go:181] (0xc0017980b0) (0xc003dbc5a0) Stream removed, 
broadcasting: 1 I0810 00:53:18.475249 8 log.go:181] (0xc0017980b0) Go away received I0810 00:53:18.475383 8 log.go:181] (0xc0017980b0) (0xc003dbc5a0) Stream removed, broadcasting: 1 I0810 00:53:18.475410 8 log.go:181] (0xc0017980b0) (0xc002da28c0) Stream removed, broadcasting: 3 I0810 00:53:18.475426 8 log.go:181] (0xc0017980b0) (0xc0010661e0) Stream removed, broadcasting: 5 Aug 10 00:53:18.475: INFO: Exec stderr: "" Aug 10 00:53:18.475: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-534 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 10 00:53:18.475: INFO: >>> kubeConfig: /root/.kube/config I0810 00:53:18.517810 8 log.go:181] (0xc0012b5600) (0xc001692c80) Create stream I0810 00:53:18.517836 8 log.go:181] (0xc0012b5600) (0xc001692c80) Stream added, broadcasting: 1 I0810 00:53:18.521296 8 log.go:181] (0xc0012b5600) Reply frame received for 1 I0810 00:53:18.521360 8 log.go:181] (0xc0012b5600) (0xc003dbc6e0) Create stream I0810 00:53:18.521384 8 log.go:181] (0xc0012b5600) (0xc003dbc6e0) Stream added, broadcasting: 3 I0810 00:53:18.523191 8 log.go:181] (0xc0012b5600) Reply frame received for 3 I0810 00:53:18.523232 8 log.go:181] (0xc0012b5600) (0xc001692d20) Create stream I0810 00:53:18.523257 8 log.go:181] (0xc0012b5600) (0xc001692d20) Stream added, broadcasting: 5 I0810 00:53:18.524341 8 log.go:181] (0xc0012b5600) Reply frame received for 5 I0810 00:53:18.585458 8 log.go:181] (0xc0012b5600) Data frame received for 5 I0810 00:53:18.585496 8 log.go:181] (0xc001692d20) (5) Data frame handling I0810 00:53:18.585515 8 log.go:181] (0xc0012b5600) Data frame received for 3 I0810 00:53:18.585523 8 log.go:181] (0xc003dbc6e0) (3) Data frame handling I0810 00:53:18.585531 8 log.go:181] (0xc003dbc6e0) (3) Data frame sent I0810 00:53:18.585544 8 log.go:181] (0xc0012b5600) Data frame received for 3 I0810 00:53:18.585550 8 log.go:181] (0xc003dbc6e0) (3) Data frame 
handling I0810 00:53:18.587068 8 log.go:181] (0xc0012b5600) Data frame received for 1 I0810 00:53:18.587102 8 log.go:181] (0xc001692c80) (1) Data frame handling I0810 00:53:18.587134 8 log.go:181] (0xc001692c80) (1) Data frame sent I0810 00:53:18.587180 8 log.go:181] (0xc0012b5600) (0xc001692c80) Stream removed, broadcasting: 1 I0810 00:53:18.587218 8 log.go:181] (0xc0012b5600) Go away received I0810 00:53:18.587296 8 log.go:181] (0xc0012b5600) (0xc001692c80) Stream removed, broadcasting: 1 I0810 00:53:18.587312 8 log.go:181] (0xc0012b5600) (0xc003dbc6e0) Stream removed, broadcasting: 3 I0810 00:53:18.587320 8 log.go:181] (0xc0012b5600) (0xc001692d20) Stream removed, broadcasting: 5 Aug 10 00:53:18.587: INFO: Exec stderr: "" Aug 10 00:53:18.587: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-534 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 10 00:53:18.587: INFO: >>> kubeConfig: /root/.kube/config I0810 00:53:18.623604 8 log.go:181] (0xc0017988f0) (0xc003dbc960) Create stream I0810 00:53:18.623641 8 log.go:181] (0xc0017988f0) (0xc003dbc960) Stream added, broadcasting: 1 I0810 00:53:18.625238 8 log.go:181] (0xc0017988f0) Reply frame received for 1 I0810 00:53:18.625271 8 log.go:181] (0xc0017988f0) (0xc001066820) Create stream I0810 00:53:18.625281 8 log.go:181] (0xc0017988f0) (0xc001066820) Stream added, broadcasting: 3 I0810 00:53:18.626204 8 log.go:181] (0xc0017988f0) Reply frame received for 3 I0810 00:53:18.626244 8 log.go:181] (0xc0017988f0) (0xc002da2960) Create stream I0810 00:53:18.626263 8 log.go:181] (0xc0017988f0) (0xc002da2960) Stream added, broadcasting: 5 I0810 00:53:18.627023 8 log.go:181] (0xc0017988f0) Reply frame received for 5 I0810 00:53:18.694065 8 log.go:181] (0xc0017988f0) Data frame received for 5 I0810 00:53:18.694193 8 log.go:181] (0xc002da2960) (5) Data frame handling I0810 00:53:18.694357 8 log.go:181] 
(0xc0017988f0) Data frame received for 3 I0810 00:53:18.694402 8 log.go:181] (0xc001066820) (3) Data frame handling I0810 00:53:18.694453 8 log.go:181] (0xc001066820) (3) Data frame sent I0810 00:53:18.694474 8 log.go:181] (0xc0017988f0) Data frame received for 3 I0810 00:53:18.694487 8 log.go:181] (0xc001066820) (3) Data frame handling I0810 00:53:18.695651 8 log.go:181] (0xc0017988f0) Data frame received for 1 I0810 00:53:18.695725 8 log.go:181] (0xc003dbc960) (1) Data frame handling I0810 00:53:18.695771 8 log.go:181] (0xc003dbc960) (1) Data frame sent I0810 00:53:18.695815 8 log.go:181] (0xc0017988f0) (0xc003dbc960) Stream removed, broadcasting: 1 I0810 00:53:18.695845 8 log.go:181] (0xc0017988f0) Go away received I0810 00:53:18.695983 8 log.go:181] (0xc0017988f0) (0xc003dbc960) Stream removed, broadcasting: 1 I0810 00:53:18.696028 8 log.go:181] (0xc0017988f0) (0xc001066820) Stream removed, broadcasting: 3 I0810 00:53:18.696058 8 log.go:181] (0xc0017988f0) (0xc002da2960) Stream removed, broadcasting: 5 Aug 10 00:53:18.696: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:53:18.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-534" for this suite. 
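The KubeletManagedEtcHosts test above repeatedly execs `cat /etc/hosts` and `cat /etc/hosts-original` in each container and checks which copy the kubelet manages. The distinction it relies on can be sketched offline as below; this is an illustrative sketch, not the framework's code, and the exact marker string the kubelet writes ("# Kubernetes-managed hosts file.") is an assumption here, as is the sample file content.

```python
# Sketch of the property the test verifies, run against sample strings
# rather than a live pod. The kubelet marker string is an assumption.
KUBELET_MARKER = "# Kubernetes-managed hosts file."

def is_kubelet_managed(etc_hosts_content: str) -> bool:
    """Return True if the /etc/hosts content carries the kubelet's header marker."""
    return etc_hosts_content.lstrip().startswith(KUBELET_MARKER)

# busybox-1 and busybox-2 mount no /etc/hosts, so the kubelet manages the
# file; busybox-3 mounts its own copy, and the hostNetwork pod keeps the
# node's original file, so neither of those carries the marker.
managed = KUBELET_MARKER + "\n127.0.0.1 localhost\n"
original = "127.0.0.1 localhost\n"
```

Under this assumption, the exec output for the kubelet-managed containers would start with the marker, while `/etc/hosts-original` and the hostNetwork pod's file would not.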
• [SLOW TEST:11.285 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":281,"skipped":4669,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:53:18.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 10 00:53:19.371: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 10 00:53:21.381: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617599, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617599, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617599, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617599, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 10 00:53:23.384: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617599, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617599, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617599, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617599, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 10 00:53:26.419: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:53:26.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4075" for this suite. STEP: Destroying namespace "webhook-4075-markers" for this suite. 
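The discovery steps logged above (fetch `/apis`, find the `admissionregistration.k8s.io` group, confirm `v1` is listed, then find the two webhook resources in the group/version document) can be sketched offline as below. The dicts mimic the shape of the API server's discovery responses; the data is hand-written for illustration, not fetched from a cluster.

```python
# Offline sketch of the discovery-document checks in this test.
# Sample documents in the shape of /apis and
# /apis/admissionregistration.k8s.io/v1 responses (illustrative data).
apis_doc = {
    "groups": [
        {
            "name": "admissionregistration.k8s.io",
            "preferredVersion": {"groupVersion": "admissionregistration.k8s.io/v1", "version": "v1"},
            "versions": [
                {"groupVersion": "admissionregistration.k8s.io/v1", "version": "v1"},
            ],
        },
    ],
}
v1_doc = {  # shape of /apis/admissionregistration.k8s.io/v1
    "groupVersion": "admissionregistration.k8s.io/v1",
    "resources": [
        {"name": "mutatingwebhookconfigurations"},
        {"name": "validatingwebhookconfigurations"},
    ],
}

# Finding the admissionregistration.k8s.io group in the /apis document.
group = next(g for g in apis_doc["groups"] if g["name"] == "admissionregistration.k8s.io")
group_versions = [v["groupVersion"] for v in group["versions"]]

# Finding the webhook resources in the group/version discovery document.
resource_names = {r["name"] for r in v1_doc["resources"]}
```

A real client would issue GETs against the API server for each of these paths; the membership checks are the same.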
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.872 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":303,"completed":282,"skipped":4695,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:53:26.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Aug 10 00:53:26.915: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config run 
e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-4917' Aug 10 00:53:27.138: INFO: stderr: "" Aug 10 00:53:27.138: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1550 Aug 10 00:53:27.424: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:42901 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-4917' Aug 10 00:53:33.227: INFO: stderr: "" Aug 10 00:53:33.227: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:53:33.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4917" for this suite. • [SLOW TEST:6.690 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1541 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":303,"completed":283,"skipped":4696,"failed":0} S ------------------------------ [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a 
kubernetes client
Aug 10 00:53:33.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: waiting for pod running
STEP: creating a file in subpath
Aug 10 00:53:37.530: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-4390 PodName:var-expansion-46563f48-c981-4840-8e86-af25fd02f037 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 10 00:53:37.530: INFO: >>> kubeConfig: /root/.kube/config
I0810 00:53:37.562637 8 log.go:181] (0xc000e1e8f0) (0xc003674c80) Create stream
I0810 00:53:37.562685 8 log.go:181] (0xc000e1e8f0) (0xc003674c80) Stream added, broadcasting: 1
I0810 00:53:37.564564 8 log.go:181] (0xc000e1e8f0) Reply frame received for 1
I0810 00:53:37.564605 8 log.go:181] (0xc000e1e8f0) (0xc001692dc0) Create stream
I0810 00:53:37.564619 8 log.go:181] (0xc000e1e8f0) (0xc001692dc0) Stream added, broadcasting: 3
I0810 00:53:37.565775 8 log.go:181] (0xc000e1e8f0) Reply frame received for 3
I0810 00:53:37.565811 8 log.go:181] (0xc000e1e8f0) (0xc003dbcd20) Create stream
I0810 00:53:37.565827 8 log.go:181] (0xc000e1e8f0) (0xc003dbcd20) Stream added, broadcasting: 5
I0810 00:53:37.566703 8 log.go:181] (0xc000e1e8f0) Reply frame received for 5
I0810 00:53:37.638587 8 log.go:181] (0xc000e1e8f0) Data frame received for 3
I0810 00:53:37.638623 8 log.go:181] (0xc001692dc0) (3) Data frame handling
I0810 00:53:37.638651 8 log.go:181] (0xc000e1e8f0) Data frame received for 5
I0810 00:53:37.638676 8 log.go:181] (0xc003dbcd20) (5) Data frame handling
I0810 00:53:37.640183 8 log.go:181] (0xc000e1e8f0) Data frame received for 1
I0810 00:53:37.640202 8 log.go:181] (0xc003674c80) (1) Data frame handling
I0810 00:53:37.640240 8 log.go:181] (0xc003674c80) (1) Data frame sent
I0810 00:53:37.640258 8 log.go:181] (0xc000e1e8f0) (0xc003674c80) Stream removed, broadcasting: 1
I0810 00:53:37.640353 8 log.go:181] (0xc000e1e8f0) (0xc003674c80) Stream removed, broadcasting: 1
I0810 00:53:37.640372 8 log.go:181] (0xc000e1e8f0) (0xc001692dc0) Stream removed, broadcasting: 3
I0810 00:53:37.640385 8 log.go:181] (0xc000e1e8f0) (0xc003dbcd20) Stream removed, broadcasting: 5
STEP: test for file in mounted path
I0810 00:53:37.640460 8 log.go:181] (0xc000e1e8f0) Go away received
Aug 10 00:53:37.644: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-4390 PodName:var-expansion-46563f48-c981-4840-8e86-af25fd02f037 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 10 00:53:37.644: INFO: >>> kubeConfig: /root/.kube/config
I0810 00:53:37.675028 8 log.go:181] (0xc0012b5c30) (0xc001693360) Create stream
I0810 00:53:37.675052 8 log.go:181] (0xc0012b5c30) (0xc001693360) Stream added, broadcasting: 1
I0810 00:53:37.677494 8 log.go:181] (0xc0012b5c30) Reply frame received for 1
I0810 00:53:37.677535 8 log.go:181] (0xc0012b5c30) (0xc0016935e0) Create stream
I0810 00:53:37.677549 8 log.go:181] (0xc0012b5c30) (0xc0016935e0) Stream added, broadcasting: 3
I0810 00:53:37.678513 8 log.go:181] (0xc0012b5c30) Reply frame received for 3
I0810 00:53:37.678535 8 log.go:181] (0xc0012b5c30) (0xc003cacdc0) Create stream
I0810 00:53:37.678545 8 log.go:181] (0xc0012b5c30) (0xc003cacdc0) Stream added, broadcasting: 5
I0810 00:53:37.679422 8 log.go:181] (0xc0012b5c30) Reply frame received for 5
I0810 00:53:37.753515 8 log.go:181] (0xc0012b5c30) Data frame received for 5
I0810 00:53:37.753578 8 log.go:181] (0xc003cacdc0) (5) Data frame handling
I0810 00:53:37.753619 8 log.go:181] (0xc0012b5c30) Data frame received for 3
I0810 00:53:37.753644 8 log.go:181] (0xc0016935e0) (3) Data frame handling
I0810 00:53:37.754912 8 log.go:181] (0xc0012b5c30) Data frame received for 1
I0810 00:53:37.754953 8 log.go:181] (0xc001693360) (1) Data frame handling
I0810 00:53:37.754976 8 log.go:181] (0xc001693360) (1) Data frame sent
I0810 00:53:37.755001 8 log.go:181] (0xc0012b5c30) (0xc001693360) Stream removed, broadcasting: 1
I0810 00:53:37.755017 8 log.go:181] (0xc0012b5c30) Go away received
I0810 00:53:37.755270 8 log.go:181] (0xc0012b5c30) (0xc001693360) Stream removed, broadcasting: 1
I0810 00:53:37.755294 8 log.go:181] (0xc0012b5c30) (0xc0016935e0) Stream removed, broadcasting: 3
I0810 00:53:37.755304 8 log.go:181] (0xc0012b5c30) (0xc003cacdc0) Stream removed, broadcasting: 5
STEP: updating the annotation value
Aug 10 00:53:38.266: INFO: Successfully updated pod "var-expansion-46563f48-c981-4840-8e86-af25fd02f037"
STEP: waiting for annotated pod running
STEP: deleting the pod gracefully
Aug 10 00:53:38.284: INFO: Deleting pod "var-expansion-46563f48-c981-4840-8e86-af25fd02f037" in namespace "var-expansion-4390"
Aug 10 00:53:38.288: INFO: Wait up to 5m0s for pod "var-expansion-46563f48-c981-4840-8e86-af25fd02f037" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 00:54:14.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4390" for this suite.
• [SLOW TEST:41.162 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":303,"completed":284,"skipped":4697,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Discovery
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 00:54:14.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename discovery
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Discovery
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39
STEP: Setting up server cert
[It] should validate PreferredVersion for each APIGroup [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Aug 10 00:54:15.010: INFO: Checking APIGroup: apiregistration.k8s.io
Aug 10 00:54:15.012: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1
Aug 10 00:54:15.012: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}]
Aug 10 00:54:15.012: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1
Aug 10 00:54:15.012: INFO: Checking APIGroup: extensions
Aug 10 00:54:15.013: INFO: PreferredVersion.GroupVersion: extensions/v1beta1
Aug 10 00:54:15.013: INFO: Versions found [{extensions/v1beta1 v1beta1}]
Aug 10 00:54:15.013: INFO: extensions/v1beta1 matches extensions/v1beta1
Aug 10 00:54:15.013: INFO: Checking APIGroup: apps
Aug 10 00:54:15.013: INFO: PreferredVersion.GroupVersion: apps/v1
Aug 10 00:54:15.013: INFO: Versions found [{apps/v1 v1}]
Aug 10 00:54:15.013: INFO: apps/v1 matches apps/v1
Aug 10 00:54:15.013: INFO: Checking APIGroup: events.k8s.io
Aug 10 00:54:15.014: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1
Aug 10 00:54:15.014: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}]
Aug 10 00:54:15.014: INFO: events.k8s.io/v1 matches events.k8s.io/v1
Aug 10 00:54:15.014: INFO: Checking APIGroup: authentication.k8s.io
Aug 10 00:54:15.015: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1
Aug 10 00:54:15.015: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}]
Aug 10 00:54:15.016: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1
Aug 10 00:54:15.016: INFO: Checking APIGroup: authorization.k8s.io
Aug 10 00:54:15.017: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1
Aug 10 00:54:15.017: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}]
Aug 10 00:54:15.017: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1
Aug 10 00:54:15.017: INFO: Checking APIGroup: autoscaling
Aug 10 00:54:15.017: INFO: PreferredVersion.GroupVersion: autoscaling/v1
Aug 10 00:54:15.017: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}]
Aug 10 00:54:15.017: INFO: autoscaling/v1 matches autoscaling/v1
Aug 10 00:54:15.017: INFO: Checking APIGroup: batch
Aug 10 00:54:15.018: INFO: PreferredVersion.GroupVersion: batch/v1
Aug 10 00:54:15.018: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}]
Aug 10 00:54:15.018: INFO: batch/v1 matches batch/v1
Aug 10 00:54:15.018: INFO: Checking APIGroup: certificates.k8s.io
Aug 10 00:54:15.018: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1
Aug 10 00:54:15.018: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}]
Aug 10 00:54:15.019: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1
Aug 10 00:54:15.019: INFO: Checking APIGroup: networking.k8s.io
Aug 10 00:54:15.019: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1
Aug 10 00:54:15.019: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}]
Aug 10 00:54:15.019: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1
Aug 10 00:54:15.019: INFO: Checking APIGroup: policy
Aug 10 00:54:15.020: INFO: PreferredVersion.GroupVersion: policy/v1beta1
Aug 10 00:54:15.020: INFO: Versions found [{policy/v1beta1 v1beta1}]
Aug 10 00:54:15.020: INFO: policy/v1beta1 matches policy/v1beta1
Aug 10 00:54:15.020: INFO: Checking APIGroup: rbac.authorization.k8s.io
Aug 10 00:54:15.021: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1
Aug 10 00:54:15.021: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}]
Aug 10 00:54:15.021: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1
Aug 10 00:54:15.021: INFO: Checking APIGroup: storage.k8s.io
Aug 10 00:54:15.022: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1
Aug 10 00:54:15.022: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}]
Aug 10 00:54:15.022: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1
Aug 10 00:54:15.022: INFO: Checking APIGroup: admissionregistration.k8s.io
Aug 10 00:54:15.023: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1
Aug 10 00:54:15.023: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}]
Aug 10 00:54:15.023: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1
Aug 10 00:54:15.023: INFO: Checking APIGroup: apiextensions.k8s.io
Aug 10 00:54:15.024: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1
Aug 10 00:54:15.024: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}]
Aug 10 00:54:15.024: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1
Aug 10 00:54:15.024: INFO: Checking APIGroup: scheduling.k8s.io
Aug 10 00:54:15.025: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1
Aug 10 00:54:15.025: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}]
Aug 10 00:54:15.025: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1
Aug 10 00:54:15.025: INFO: Checking APIGroup: coordination.k8s.io
Aug 10 00:54:15.026: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1
Aug 10 00:54:15.026: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}]
Aug 10 00:54:15.026: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1
Aug 10 00:54:15.026: INFO: Checking APIGroup: node.k8s.io
Aug 10 00:54:15.026: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1beta1
Aug 10 00:54:15.026: INFO: Versions found [{node.k8s.io/v1beta1 v1beta1}]
Aug 10 00:54:15.026: INFO: node.k8s.io/v1beta1 matches node.k8s.io/v1beta1
Aug 10 00:54:15.026: INFO: Checking APIGroup: discovery.k8s.io
Aug 10 00:54:15.027: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1beta1
Aug 10 00:54:15.027: INFO: Versions found [{discovery.k8s.io/v1beta1 v1beta1}]
Aug 10 00:54:15.027: INFO: discovery.k8s.io/v1beta1 matches discovery.k8s.io/v1beta1
[AfterEach] [sig-api-machinery] Discovery
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 00:54:15.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-1039" for this suite.
•{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":303,"completed":285,"skipped":4711,"failed":0}
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 00:54:15.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug 10 00:54:15.148: INFO: Waiting up to 5m0s for pod "pod-6c36a351-7f40-4538-b815-60940f833664" in namespace "emptydir-979" to be "Succeeded or Failed"
Aug 10 00:54:15.155: INFO: Pod "pod-6c36a351-7f40-4538-b815-60940f833664": Phase="Pending", Reason="", readiness=false. Elapsed: 6.165833ms
Aug 10 00:54:17.167: INFO: Pod "pod-6c36a351-7f40-4538-b815-60940f833664": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018396855s
Aug 10 00:54:19.170: INFO: Pod "pod-6c36a351-7f40-4538-b815-60940f833664": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021669485s
STEP: Saw pod success
Aug 10 00:54:19.170: INFO: Pod "pod-6c36a351-7f40-4538-b815-60940f833664" satisfied condition "Succeeded or Failed"
Aug 10 00:54:19.173: INFO: Trying to get logs from node latest-worker2 pod pod-6c36a351-7f40-4538-b815-60940f833664 container test-container:
STEP: delete the pod
Aug 10 00:54:19.236: INFO: Waiting for pod pod-6c36a351-7f40-4538-b815-60940f833664 to disappear
Aug 10 00:54:19.247: INFO: Pod pod-6c36a351-7f40-4538-b815-60940f833664 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 00:54:19.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-979" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":286,"skipped":4711,"failed":0}
SS
------------------------------
[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 00:54:19.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename certificates
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support CSR API operations [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: getting /apis
STEP: getting /apis/certificates.k8s.io
STEP: getting /apis/certificates.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Aug 10 00:54:19.969: INFO: starting watch
STEP: patching
STEP: updating
Aug 10 00:54:19.979: INFO: waiting for watch events with expected annotations
Aug 10 00:54:19.980: INFO: saw patched and updated annotations
STEP: getting /approval
STEP: patching /approval
STEP: updating /approval
STEP: getting /status
STEP: patching /status
STEP: updating /status
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 00:54:20.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "certificates-9304" for this suite.
•{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":303,"completed":287,"skipped":4713,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 00:54:20.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 10 00:54:20.808: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 10 00:54:22.837: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617660, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617660, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617660, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617660, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 10 00:54:25.908: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Aug 10 00:54:25.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6451-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 00:54:27.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5574" for this suite.
STEP: Destroying namespace "webhook-5574-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:7.191 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate custom resource with different stored version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":303,"completed":288,"skipped":4718,"failed":0}
SS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 00:54:27.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89
Aug 10 00:54:27.417: INFO: Waiting up to 1m0s for all nodes to be ready
Aug 10 00:55:27.439: INFO: Waiting for terminating namespaces to be deleted...
[It] validates basic preemption works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Create pods that use 2/3 of node resources.
Aug 10 00:55:27.455: INFO: Created pod: pod0-sched-preemption-low-priority
Aug 10 00:55:27.486: INFO: Created pod: pod1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled.
STEP: Run a high priority pod that has same requirements as that of lower priority pod
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 00:55:47.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-8883" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77
• [SLOW TEST:80.308 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates basic preemption works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":303,"completed":289,"skipped":4720,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 00:55:47.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-84d603b7-a499-474a-bdcd-e5b1d9e7eadd
STEP: Creating a pod to test consume configMaps
Aug 10 00:55:47.805: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2e3b5f9e-3353-4216-b412-fef46740d07b" in namespace "projected-8443" to be "Succeeded or Failed"
Aug 10 00:55:47.820: INFO: Pod "pod-projected-configmaps-2e3b5f9e-3353-4216-b412-fef46740d07b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.078193ms
Aug 10 00:55:49.824: INFO: Pod "pod-projected-configmaps-2e3b5f9e-3353-4216-b412-fef46740d07b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019574196s
Aug 10 00:55:51.828: INFO: Pod "pod-projected-configmaps-2e3b5f9e-3353-4216-b412-fef46740d07b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023245383s
STEP: Saw pod success
Aug 10 00:55:51.828: INFO: Pod "pod-projected-configmaps-2e3b5f9e-3353-4216-b412-fef46740d07b" satisfied condition "Succeeded or Failed"
Aug 10 00:55:51.831: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-2e3b5f9e-3353-4216-b412-fef46740d07b container projected-configmap-volume-test:
STEP: delete the pod
Aug 10 00:55:51.901: INFO: Waiting for pod pod-projected-configmaps-2e3b5f9e-3353-4216-b412-fef46740d07b to disappear
Aug 10 00:55:51.909: INFO: Pod pod-projected-configmaps-2e3b5f9e-3353-4216-b412-fef46740d07b no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 00:55:51.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8443" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":303,"completed":290,"skipped":4742,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 00:55:51.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Aug 10 00:55:52.017: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ba0d4937-3d37-478c-aaf8-f58e28ff5c6c" in namespace "downward-api-8948" to be "Succeeded or Failed"
Aug 10 00:55:52.024: INFO: Pod "downwardapi-volume-ba0d4937-3d37-478c-aaf8-f58e28ff5c6c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.378349ms
Aug 10 00:55:54.071: INFO: Pod "downwardapi-volume-ba0d4937-3d37-478c-aaf8-f58e28ff5c6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054114211s
Aug 10 00:55:56.075: INFO: Pod "downwardapi-volume-ba0d4937-3d37-478c-aaf8-f58e28ff5c6c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057391813s
Aug 10 00:55:58.078: INFO: Pod "downwardapi-volume-ba0d4937-3d37-478c-aaf8-f58e28ff5c6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.060833444s
STEP: Saw pod success
Aug 10 00:55:58.078: INFO: Pod "downwardapi-volume-ba0d4937-3d37-478c-aaf8-f58e28ff5c6c" satisfied condition "Succeeded or Failed"
Aug 10 00:55:58.080: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-ba0d4937-3d37-478c-aaf8-f58e28ff5c6c container client-container:
STEP: delete the pod
Aug 10 00:55:58.117: INFO: Waiting for pod downwardapi-volume-ba0d4937-3d37-478c-aaf8-f58e28ff5c6c to disappear
Aug 10 00:55:58.186: INFO: Pod downwardapi-volume-ba0d4937-3d37-478c-aaf8-f58e28ff5c6c no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 00:55:58.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8948" for this suite.
• [SLOW TEST:6.276 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":291,"skipped":4753,"failed":0}
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 00:55:58.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-map-ee124e78-4909-4e26-b99c-57d2edebf55d
STEP: Creating a pod to test consume secrets
Aug 10 00:55:58.271: INFO: Waiting up to 5m0s for pod "pod-secrets-02fb48cb-7afd-40e6-9c84-856498623d02" in namespace "secrets-397" to be "Succeeded or Failed"
Aug 10 00:55:58.324: INFO: Pod "pod-secrets-02fb48cb-7afd-40e6-9c84-856498623d02": Phase="Pending", Reason="", readiness=false. Elapsed: 52.345201ms
Aug 10 00:56:00.328: INFO: Pod "pod-secrets-02fb48cb-7afd-40e6-9c84-856498623d02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056581529s
Aug 10 00:56:02.333: INFO: Pod "pod-secrets-02fb48cb-7afd-40e6-9c84-856498623d02": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061328783s
STEP: Saw pod success
Aug 10 00:56:02.333: INFO: Pod "pod-secrets-02fb48cb-7afd-40e6-9c84-856498623d02" satisfied condition "Succeeded or Failed"
Aug 10 00:56:02.336: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-02fb48cb-7afd-40e6-9c84-856498623d02 container secret-volume-test:
STEP: delete the pod
Aug 10 00:56:02.350: INFO: Waiting for pod pod-secrets-02fb48cb-7afd-40e6-9c84-856498623d02 to disappear
Aug 10 00:56:02.355: INFO: Pod pod-secrets-02fb48cb-7afd-40e6-9c84-856498623d02 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 00:56:02.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-397" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":292,"skipped":4753,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 00:56:02.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-08d9805c-665f-40bd-94c0-4054fa85e9e1
STEP: Creating a pod to test consume configMaps
Aug 10 00:56:02.526: INFO: Waiting up to 5m0s for pod "pod-configmaps-6ce800a2-ac40-434d-b755-2f489662e644" in namespace "configmap-8704" to be "Succeeded or Failed"
Aug 10 00:56:02.618: INFO: Pod "pod-configmaps-6ce800a2-ac40-434d-b755-2f489662e644": Phase="Pending", Reason="", readiness=false. Elapsed: 91.528854ms
Aug 10 00:56:04.696: INFO: Pod "pod-configmaps-6ce800a2-ac40-434d-b755-2f489662e644": Phase="Pending", Reason="", readiness=false. Elapsed: 2.169982369s
Aug 10 00:56:06.701: INFO: Pod "pod-configmaps-6ce800a2-ac40-434d-b755-2f489662e644": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.174522018s
STEP: Saw pod success
Aug 10 00:56:06.701: INFO: Pod "pod-configmaps-6ce800a2-ac40-434d-b755-2f489662e644" satisfied condition "Succeeded or Failed"
Aug 10 00:56:06.704: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-6ce800a2-ac40-434d-b755-2f489662e644 container configmap-volume-test:
STEP: delete the pod
Aug 10 00:56:06.869: INFO: Waiting for pod pod-configmaps-6ce800a2-ac40-434d-b755-2f489662e644 to disappear
Aug 10 00:56:06.872: INFO: Pod pod-configmaps-6ce800a2-ac40-434d-b755-2f489662e644 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 00:56:06.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8704" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":293,"skipped":4761,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 00:56:06.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Aug 10 00:56:07.028: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 10 00:56:07.036: INFO: Waiting for terminating namespaces to be deleted...
Aug 10 00:56:07.038: INFO: Logging pods the apiserver thinks is on node latest-worker before test Aug 10 00:56:07.042: INFO: coredns-f9fd979d6-s745j from kube-system started at 2020-07-19 21:39:25 +0000 UTC (1 container statuses recorded) Aug 10 00:56:07.042: INFO: Container coredns ready: true, restart count 0 Aug 10 00:56:07.042: INFO: coredns-f9fd979d6-zs4sj from kube-system started at 2020-07-19 21:39:36 +0000 UTC (1 container statuses recorded) Aug 10 00:56:07.042: INFO: Container coredns ready: true, restart count 0 Aug 10 00:56:07.042: INFO: kindnet-46dnt from kube-system started at 2020-07-19 21:38:46 +0000 UTC (1 container statuses recorded) Aug 10 00:56:07.042: INFO: Container kindnet-cni ready: true, restart count 0 Aug 10 00:56:07.042: INFO: kube-proxy-sxpg9 from kube-system started at 2020-07-19 21:38:45 +0000 UTC (1 container statuses recorded) Aug 10 00:56:07.042: INFO: Container kube-proxy ready: true, restart count 0 Aug 10 00:56:07.042: INFO: local-path-provisioner-8b46957d4-2gzpd from local-path-storage started at 2020-07-19 21:39:25 +0000 UTC (1 container statuses recorded) Aug 10 00:56:07.042: INFO: Container local-path-provisioner ready: true, restart count 0 Aug 10 00:56:07.042: INFO: Logging pods the apiserver thinks is on node latest-worker2 before test Aug 10 00:56:07.046: INFO: kindnet-g6zbt from kube-system started at 2020-07-19 21:38:46 +0000 UTC (1 container statuses recorded) Aug 10 00:56:07.046: INFO: Container kindnet-cni ready: true, restart count 0 Aug 10 00:56:07.046: INFO: kube-proxy-nsnzn from kube-system started at 2020-07-19 21:38:45 +0000 UTC (1 container statuses recorded) Aug 10 00:56:07.046: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to schedule Pod with nonempty NodeSelector. 
STEP: Considering event: Type = [Warning], Name = [restricted-pod.1629c21bb0c782d7], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.1629c21bb28f2705], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:56:08.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9730" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":303,"completed":294,"skipped":4787,"failed":0} SSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:56:08.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-8067/configmap-test-bb3eec01-9b6c-43e3-a0c1-9bbb90535081 STEP: Creating a pod to test consume configMaps Aug 10 00:56:08.156: INFO: Waiting up to 5m0s for pod 
"pod-configmaps-3136ef84-faa6-4786-8edd-f2d77f172e59" in namespace "configmap-8067" to be "Succeeded or Failed" Aug 10 00:56:08.173: INFO: Pod "pod-configmaps-3136ef84-faa6-4786-8edd-f2d77f172e59": Phase="Pending", Reason="", readiness=false. Elapsed: 17.128058ms Aug 10 00:56:10.210: INFO: Pod "pod-configmaps-3136ef84-faa6-4786-8edd-f2d77f172e59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054073315s Aug 10 00:56:12.213: INFO: Pod "pod-configmaps-3136ef84-faa6-4786-8edd-f2d77f172e59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056847099s STEP: Saw pod success Aug 10 00:56:12.213: INFO: Pod "pod-configmaps-3136ef84-faa6-4786-8edd-f2d77f172e59" satisfied condition "Succeeded or Failed" Aug 10 00:56:12.215: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-3136ef84-faa6-4786-8edd-f2d77f172e59 container env-test: STEP: delete the pod Aug 10 00:56:12.259: INFO: Waiting for pod pod-configmaps-3136ef84-faa6-4786-8edd-f2d77f172e59 to disappear Aug 10 00:56:12.272: INFO: Pod pod-configmaps-3136ef84-faa6-4786-8edd-f2d77f172e59 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:56:12.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8067" for this suite. 
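For context on the test above: the pod it creates pulls every key of the ConfigMap in as environment variables and then exits, which is why the log waits for "Succeeded or Failed". A minimal Python sketch of the kind of manifest involved — the image tag comes from elsewhere in this log, but the container command, `envFrom` wiring, and helper name are illustrative assumptions, not the test's actual source:

```python
# Sketch only (not from the log): a pod that consumes a ConfigMap via env vars.
def configmap_env_pod(namespace, configmap_name):
    """Build a pod manifest whose container imports all ConfigMap keys via envFrom."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"generateName": "pod-configmaps-", "namespace": namespace},
        "spec": {
            # restartPolicy Never lets the pod reach the Succeeded phase the test polls for.
            "restartPolicy": "Never",
            "containers": [{
                "name": "env-test",  # matches the container name logged above
                "image": "k8s.gcr.io/e2e-test-images/agnhost:2.20",
                "command": ["sh", "-c", "env"],  # dump env vars, then exit 0
                "envFrom": [{"configMapRef": {"name": configmap_name}}],
            }],
        },
    }

pod = configmap_env_pod("configmap-8067", "configmap-test-demo")
```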
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":303,"completed":295,"skipped":4793,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:56:12.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Aug 10 00:56:12.352: INFO: >>> kubeConfig: /root/.kube/config Aug 10 00:56:15.339: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:56:26.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4630" for this suite. 
• [SLOW TEST:14.093 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":303,"completed":296,"skipped":4803,"failed":0} SSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:56:26.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:56:31.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-194" for 
this suite. • [SLOW TEST:5.136 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":303,"completed":297,"skipped":4808,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:56:31.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Aug 10 00:56:36.199: INFO: Successfully updated pod "labelsupdatee4c8cc7a-4e4a-46c7-a461-315db2612af1" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:56:40.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2323" for this suite. 
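For context on the projected downwardAPI test above: the pod mounts its own `metadata.labels` through a projected volume, and the test then patches the labels and waits for the kubelet to refresh the file (hence "Successfully updated pod" followed by a delay before teardown). A Python sketch of such a manifest — volume name, mount path, and command are illustrative assumptions:

```python
# Sketch only (not from the log): a pod exposing its labels via a projected
# downwardAPI volume, so label updates propagate into the mounted file.
def downward_labels_pod(name):
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name, "labels": {"time": "init"}},
        "spec": {
            "containers": [{
                "name": "client-container",
                "image": "k8s.gcr.io/e2e-test-images/agnhost:2.20",
                # Re-read the projected file periodically to observe label changes.
                "command": ["sh", "-c",
                            "while true; do cat /etc/podinfo/labels; sleep 5; done"],
                "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
            }],
            "volumes": [{
                "name": "podinfo",
                "projected": {"sources": [{
                    "downwardAPI": {"items": [{
                        "path": "labels",
                        "fieldRef": {"fieldPath": "metadata.labels"},
                    }]},
                }]},
            }],
        },
    }

pod = downward_labels_pod("labelsupdate-demo")
```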
• [SLOW TEST:8.727 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":303,"completed":298,"skipped":4833,"failed":0} SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:56:40.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-secret-2jgm STEP: Creating a pod to test atomic-volume-subpath Aug 10 00:56:40.365: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-2jgm" in namespace "subpath-5830" to be "Succeeded or Failed" Aug 10 00:56:40.372: INFO: Pod "pod-subpath-test-secret-2jgm": Phase="Pending", Reason="", readiness=false. Elapsed: 7.001961ms Aug 10 00:56:42.376: INFO: Pod "pod-subpath-test-secret-2jgm": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.011670265s Aug 10 00:56:44.379: INFO: Pod "pod-subpath-test-secret-2jgm": Phase="Running", Reason="", readiness=true. Elapsed: 4.01478652s Aug 10 00:56:46.391: INFO: Pod "pod-subpath-test-secret-2jgm": Phase="Running", Reason="", readiness=true. Elapsed: 6.026494636s Aug 10 00:56:48.395: INFO: Pod "pod-subpath-test-secret-2jgm": Phase="Running", Reason="", readiness=true. Elapsed: 8.030543635s Aug 10 00:56:50.399: INFO: Pod "pod-subpath-test-secret-2jgm": Phase="Running", Reason="", readiness=true. Elapsed: 10.034530158s Aug 10 00:56:52.404: INFO: Pod "pod-subpath-test-secret-2jgm": Phase="Running", Reason="", readiness=true. Elapsed: 12.039326904s Aug 10 00:56:54.409: INFO: Pod "pod-subpath-test-secret-2jgm": Phase="Running", Reason="", readiness=true. Elapsed: 14.044176323s Aug 10 00:56:56.413: INFO: Pod "pod-subpath-test-secret-2jgm": Phase="Running", Reason="", readiness=true. Elapsed: 16.048304949s Aug 10 00:56:58.417: INFO: Pod "pod-subpath-test-secret-2jgm": Phase="Running", Reason="", readiness=true. Elapsed: 18.05258016s Aug 10 00:57:00.422: INFO: Pod "pod-subpath-test-secret-2jgm": Phase="Running", Reason="", readiness=true. Elapsed: 20.056996007s Aug 10 00:57:02.426: INFO: Pod "pod-subpath-test-secret-2jgm": Phase="Running", Reason="", readiness=true. Elapsed: 22.060835098s Aug 10 00:57:04.430: INFO: Pod "pod-subpath-test-secret-2jgm": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.065695356s STEP: Saw pod success Aug 10 00:57:04.430: INFO: Pod "pod-subpath-test-secret-2jgm" satisfied condition "Succeeded or Failed" Aug 10 00:57:04.434: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-secret-2jgm container test-container-subpath-secret-2jgm: STEP: delete the pod Aug 10 00:57:04.524: INFO: Waiting for pod pod-subpath-test-secret-2jgm to disappear Aug 10 00:57:04.574: INFO: Pod pod-subpath-test-secret-2jgm no longer exists STEP: Deleting pod pod-subpath-test-secret-2jgm Aug 10 00:57:04.574: INFO: Deleting pod "pod-subpath-test-secret-2jgm" in namespace "subpath-5830" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:57:04.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5830" for this suite. • [SLOW TEST:24.351 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":303,"completed":299,"skipped":4838,"failed":0} SSS ------------------------------ [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:57:04.586: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:57:04.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5419" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":303,"completed":300,"skipped":4841,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:57:04.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-19e3e9ea-dbe7-4592-a5e2-1aebdda3ef9c STEP: Creating a pod to test consume configMaps Aug 10 00:57:04.923: INFO: Waiting up to 5m0s for 
pod "pod-configmaps-421711e4-9c42-4e06-aafc-9e2e4e7b1708" in namespace "configmap-9275" to be "Succeeded or Failed" Aug 10 00:57:04.944: INFO: Pod "pod-configmaps-421711e4-9c42-4e06-aafc-9e2e4e7b1708": Phase="Pending", Reason="", readiness=false. Elapsed: 21.49729ms Aug 10 00:57:06.949: INFO: Pod "pod-configmaps-421711e4-9c42-4e06-aafc-9e2e4e7b1708": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026472735s Aug 10 00:57:08.953: INFO: Pod "pod-configmaps-421711e4-9c42-4e06-aafc-9e2e4e7b1708": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030605307s STEP: Saw pod success Aug 10 00:57:08.953: INFO: Pod "pod-configmaps-421711e4-9c42-4e06-aafc-9e2e4e7b1708" satisfied condition "Succeeded or Failed" Aug 10 00:57:08.956: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-421711e4-9c42-4e06-aafc-9e2e4e7b1708 container configmap-volume-test: STEP: delete the pod Aug 10 00:57:08.986: INFO: Waiting for pod pod-configmaps-421711e4-9c42-4e06-aafc-9e2e4e7b1708 to disappear Aug 10 00:57:08.994: INFO: Pod pod-configmaps-421711e4-9c42-4e06-aafc-9e2e4e7b1708 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:57:08.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9275" for this suite. 
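For context on the "as non-root" variant above: it is the same ConfigMap-volume consumption pattern, but the pod-level securityContext forces the container to run as an unprivileged UID. A Python sketch — the UID, args, and mount path are illustrative assumptions, not taken from the test's source:

```python
# Sketch only (not from the log): a ConfigMap mounted as a volume,
# read by a container that is forced to run as a non-root user.
def nonroot_configmap_pod(namespace, configmap_name):
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"generateName": "pod-configmaps-", "namespace": namespace},
        "spec": {
            "restartPolicy": "Never",
            # Pod-level securityContext: kubelet rejects the pod if the
            # image would run as UID 0 while runAsNonRoot is set.
            "securityContext": {"runAsUser": 1000, "runAsNonRoot": True},
            "containers": [{
                "name": "configmap-volume-test",  # container name logged above
                "image": "k8s.gcr.io/e2e-test-images/agnhost:2.20",
                "args": ["mounttest", "--file_content=/etc/configmap-volume/data-1"],
                "volumeMounts": [{"name": "configmap-volume",
                                  "mountPath": "/etc/configmap-volume"}],
            }],
            "volumes": [{"name": "configmap-volume",
                         "configMap": {"name": configmap_name}}],
        },
    }

pod = nonroot_configmap_pod("configmap-9275", "configmap-test-volume-demo")
```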
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":303,"completed":301,"skipped":4856,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Aug 10 00:57:09.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:77 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Aug 10 00:57:09.113: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Aug 10 00:57:09.146: INFO: Pod name sample-pod: Found 0 pods out of 1 Aug 10 00:57:14.149: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 10 00:57:14.149: INFO: Creating deployment "test-rolling-update-deployment" Aug 10 00:57:14.161: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Aug 10 00:57:14.173: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Aug 10 00:57:16.215: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Aug 10 00:57:16.281: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, 
AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617834, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617834, loc:(*time.Location)(0x7e34b60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617834, loc:(*time.Location)(0x7e34b60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732617834, loc:(*time.Location)(0x7e34b60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-c4cb8d6d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 10 00:57:18.295: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:71 Aug 10 00:57:18.302: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-3522 /apis/apps/v1/namespaces/deployment-3522/deployments/test-rolling-update-deployment 8adb54e8-aaf6-4529-88bf-e7da3e319ae9 5798822 1 2020-08-10 00:57:14 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-08-10 00:57:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-08-10 00:57:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00665cd78 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-10 00:57:14 +0000 UTC,LastTransitionTime:2020-08-10 00:57:14 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-c4cb8d6d9" has successfully progressed.,LastUpdateTime:2020-08-10 00:57:17 +0000 UTC,LastTransitionTime:2020-08-10 00:57:14 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Aug 10 00:57:18.305: INFO: New ReplicaSet "test-rolling-update-deployment-c4cb8d6d9" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-c4cb8d6d9 deployment-3522 /apis/apps/v1/namespaces/deployment-3522/replicasets/test-rolling-update-deployment-c4cb8d6d9 6208212d-b542-4c8a-9785-84894560ad4c 5798811 1 2020-08-10 00:57:14 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 8adb54e8-aaf6-4529-88bf-e7da3e319ae9 0xc00665d3e0 0xc00665d3e1}] [] [{kube-controller-manager Update apps/v1 2020-08-10 00:57:17 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8adb54e8-aaf6-4529-88bf-e7da3e319ae9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: c4cb8d6d9,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00665d478 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Aug 10 00:57:18.305: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Aug 10 00:57:18.305: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-3522 /apis/apps/v1/namespaces/deployment-3522/replicasets/test-rolling-update-controller 12b77ad0-302c-436f-bbe3-3fe33c613ba0 5798821 2 2020-08-10 00:57:09 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 8adb54e8-aaf6-4529-88bf-e7da3e319ae9 0xc00665d2c7 0xc00665d2c8}] [] [{e2e.test Update apps/v1 2020-08-10 00:57:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-08-10 00:57:17 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8adb54e8-aaf6-4529-88bf-e7da3e319ae9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00665d378 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 10 00:57:18.308: INFO: Pod "test-rolling-update-deployment-c4cb8d6d9-hhmqk" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-c4cb8d6d9-hhmqk test-rolling-update-deployment-c4cb8d6d9- deployment-3522 /api/v1/namespaces/deployment-3522/pods/test-rolling-update-deployment-c4cb8d6d9-hhmqk 5ddb7538-07ed-4dbb-a63b-a7403f6ad2b3 5798810 0 2020-08-10 00:57:14 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-c4cb8d6d9 6208212d-b542-4c8a-9785-84894560ad4c 0xc00665db40 0xc00665db41}] [] [{kube-controller-manager Update v1 2020-08-10 00:57:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6208212d-b542-4c8a-9785-84894560ad4c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-08-10 00:57:17 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.144\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fnt9g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fnt9g,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resource
s:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fnt9g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer
{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-10 00:57:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-10 00:57:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-10 00:57:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-10 00:57:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.144,StartTime:2020-08-10 00:57:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-10 00:57:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://979bfe895078d5dba0c540688ba8268a15aaf15eca8507526f991b21ba3f2f18,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.144,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Aug 10 00:57:18.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3522" for this suite. 
• [SLOW TEST:9.313 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":303,"completed":302,"skipped":4872,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Aug 10 00:57:18.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod test-webserver-1638b93a-971f-49b1-9b60-d87b7e8553c8 in namespace container-probe-8488
Aug 10 00:57:22.531: INFO: Started pod test-webserver-1638b93a-971f-49b1-9b60-d87b7e8553c8 in namespace container-probe-8488
STEP: checking the pod's current state and verifying that restartCount is present
Aug 10 00:57:22.534: INFO: Initial restart count of pod test-webserver-1638b93a-971f-49b1-9b60-d87b7e8553c8 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Aug 10 01:01:23.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8488" for this suite.
• [SLOW TEST:245.097 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":303,"completed":303,"skipped":4896,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Aug 10 01:01:23.414: INFO: Running AfterSuite actions on all nodes
Aug 10 01:01:23.414: INFO: Running AfterSuite actions on node 1
Aug 10 01:01:23.414: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":303,"completed":303,"skipped":4935,"failed":0}
Ran 303 of 5238 Specs in 6062.762 seconds
SUCCESS! -- 303 Passed | 0 Failed | 0 Pending | 4935 Skipped
PASS