I0312 23:35:41.507442 7 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0312 23:35:41.507595 7 e2e.go:124] Starting e2e run "4114b614-3358-44c1-8546-4721f3a73760" on Ginkgo node 1
{"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1584056140 - Will randomize all specs
Will run 275 of 4992 specs
Mar 12 23:35:41.556: INFO: >>> kubeConfig: /root/.kube/config
Mar 12 23:35:41.558: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 12 23:35:41.574: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 12 23:35:41.608: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 12 23:35:41.609: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 12 23:35:41.609: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 12 23:35:41.614: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 12 23:35:41.614: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 12 23:35:41.614: INFO: e2e test version: v1.19.0-alpha.0.749+55bb72b77444f7
Mar 12 23:35:41.615: INFO: kube-apiserver version: v1.17.0
Mar 12 23:35:41.615: INFO: >>> kubeConfig: /root/.kube/config
Mar 12 23:35:41.617: INFO: Cluster IP family: ipv4
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:35:41.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
Mar 12 23:35:41.692: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:35:52.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9259" for this suite.
• [SLOW TEST:11.158 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":275,"completed":1,"skipped":0,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:35:52.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 12 23:35:52.867: INFO: (0) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 18.148432ms) Mar 12 23:35:52.871: INFO: (1) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 3.727334ms) Mar 12 23:35:52.874: INFO: (2) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 3.154425ms) Mar 12 23:35:52.877: INFO: (3) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 3.121817ms) Mar 12 23:35:52.881: INFO: (4) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 3.437793ms) Mar 12 23:35:52.884: INFO: (5) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 3.145185ms) Mar 12 23:35:52.887: INFO: (6) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 2.957733ms) Mar 12 23:35:52.890: INFO: (7) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 2.948349ms) Mar 12 23:35:52.893: INFO: (8) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 2.999448ms) Mar 12 23:35:52.896: INFO: (9) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 2.808647ms) Mar 12 23:35:52.899: INFO: (10) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 2.873977ms) Mar 12 23:35:52.902: INFO: (11) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 2.791922ms) Mar 12 23:35:52.904: INFO: (12) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 2.882991ms) Mar 12 23:35:52.907: INFO: (13) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 2.929892ms) Mar 12 23:35:52.910: INFO: (14) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 2.609308ms) Mar 12 23:35:52.913: INFO: (15) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 2.7876ms) Mar 12 23:35:52.915: INFO: (16) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 2.324107ms) Mar 12 23:35:52.918: INFO: (17) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 2.823278ms) Mar 12 23:35:52.921: INFO: (18) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 2.639794ms) Mar 12 23:35:52.924: INFO: (19) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/
pods/
(200; 3.399338ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:35:52.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-4812" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":275,"completed":2,"skipped":53,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:35:52.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 12 23:35:53.053: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 12 23:35:55.968: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3175 create -f -' Mar 12 23:35:57.963: INFO: stderr: "" Mar 12 23:35:57.963: INFO: stdout: "e2e-test-crd-publish-openapi-792-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 12 23:35:57.963: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3175 delete e2e-test-crd-publish-openapi-792-crds test-cr' Mar 12 23:35:58.090: INFO: stderr: "" Mar 12 23:35:58.090: INFO: stdout: "e2e-test-crd-publish-openapi-792-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Mar 12 23:35:58.090: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3175 apply -f -' Mar 12 23:35:58.314: INFO: stderr: "" Mar 12 23:35:58.314: INFO: stdout: "e2e-test-crd-publish-openapi-792-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 12 23:35:58.314: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3175 delete e2e-test-crd-publish-openapi-792-crds test-cr' Mar 12 23:35:58.406: INFO: stderr: "" Mar 12 23:35:58.406: INFO: stdout: "e2e-test-crd-publish-openapi-792-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 12 23:35:58.406: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-792-crds' Mar 12 23:35:58.604: INFO: stderr: "" Mar 12 23:35:58.604: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-792-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:36:01.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3175" for this suite. • [SLOW TEST:8.612 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":3,"skipped":61,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:36:01.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: Gathering metrics W0312 23:36:07.627991 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 12 23:36:07.628: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:36:07.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9000" for this suite. 
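The deleteOptions behavior just exercised can be reproduced by hand. A minimal sketch, assuming a reachable cluster and a ReplicationController named my-rc (name hypothetical): with propagationPolicy=Foreground, the RC is retained, carrying a deletionTimestamp and the foregroundDeletion finalizer, until the garbage collector has deleted all of its pods.

kubectl proxy --port=8080 &
curl -X DELETE http://localhost:8080/api/v1/namespaces/default/replicationcontrollers/my-rc \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'
# While its pods are being deleted, the RC still exists and shows the finalizer:
kubectl get rc my-rc -o jsonpath='{.metadata.finalizers}'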
• [SLOW TEST:6.089 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":4,"skipped":105,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:36:07.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Mar 12 23:36:07.672: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:36:11.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2373" for this suite. 
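The init-container behavior verified above is easy to reproduce outside the suite. A minimal sketch (pod name and images are illustrative, not the suite's fixture): with restartPolicy: Never, a failing init container is not retried, the pod goes to phase Failed, and the app container never starts.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fails
    image: busybox:1.29
    command: ['sh', '-c', 'exit 1']
  containers:
  - name: app
    image: busybox:1.29
    command: ['sh', '-c', 'echo this never runs']
EOF
kubectl get pod init-fail-demo -o jsonpath='{.status.phase}'   # eventually: Failed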
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":5,"skipped":115,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:36:11.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:36:11.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1753" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":6,"skipped":141,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:36:11.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-6171 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-6171 STEP: Creating statefulset with conflicting port in namespace statefulset-6171 STEP: Waiting until pod test-pod will start running in namespace statefulset-6171 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-6171 Mar 12 23:36:13.404: INFO: Observed stateful pod in namespace: statefulset-6171, name: ss-0, uid: fa4cb0e0-6551-4ce4-a532-8fe5f8ca2b87, status phase: Pending. Waiting for statefulset controller to delete. 
Mar 12 23:36:22.461: INFO: Observed stateful pod in namespace: statefulset-6171, name: ss-0, uid: fa4cb0e0-6551-4ce4-a532-8fe5f8ca2b87, status phase: Failed. Waiting for statefulset controller to delete. Mar 12 23:36:22.465: INFO: Observed stateful pod in namespace: statefulset-6171, name: ss-0, uid: fa4cb0e0-6551-4ce4-a532-8fe5f8ca2b87, status phase: Failed. Waiting for statefulset controller to delete. Mar 12 23:36:22.506: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-6171 STEP: Removing pod with conflicting port in namespace statefulset-6171 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-6171 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 12 23:36:24.596: INFO: Deleting all statefulset in ns statefulset-6171 Mar 12 23:36:24.598: INFO: Scaling statefulset ss to 0 Mar 12 23:36:34.626: INFO: Waiting for statefulset status.replicas updated to 0 Mar 12 23:36:34.628: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:36:34.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6171" for this suite. • [SLOW TEST:23.334 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":7,"skipped":143,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:36:34.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating Agnhost RC Mar 12 23:36:35.282: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4344' Mar 12 23:36:35.585: INFO: stderr: "" Mar 12 23:36:35.585: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
Mar 12 23:36:36.588: INFO: Selector matched 1 pods for map[app:agnhost] Mar 12 23:36:36.588: INFO: Found 0 / 1 Mar 12 23:36:37.589: INFO: Selector matched 1 pods for map[app:agnhost] Mar 12 23:36:37.589: INFO: Found 1 / 1 Mar 12 23:36:37.589: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Mar 12 23:36:37.592: INFO: Selector matched 1 pods for map[app:agnhost] Mar 12 23:36:37.592: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 12 23:36:37.592: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config patch pod agnhost-master-csb26 --namespace=kubectl-4344 -p {"metadata":{"annotations":{"x":"y"}}}' Mar 12 23:36:37.705: INFO: stderr: "" Mar 12 23:36:37.705: INFO: stdout: "pod/agnhost-master-csb26 patched\n" STEP: checking annotations Mar 12 23:36:37.711: INFO: Selector matched 1 pods for map[app:agnhost] Mar 12 23:36:37.711: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:36:37.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4344" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":275,"completed":8,"skipped":165,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:36:37.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Mar 12 23:36:37.791: INFO: Waiting up to 5m0s for pod "downward-api-244a62ef-8651-4651-bb21-d01bae0fe2b7" in namespace "downward-api-8536" to be "Succeeded or Failed" Mar 12 23:36:37.795: INFO: Pod "downward-api-244a62ef-8651-4651-bb21-d01bae0fe2b7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.762187ms Mar 12 23:36:39.824: INFO: Pod "downward-api-244a62ef-8651-4651-bb21-d01bae0fe2b7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.032533752s STEP: Saw pod success Mar 12 23:36:39.824: INFO: Pod "downward-api-244a62ef-8651-4651-bb21-d01bae0fe2b7" satisfied condition "Succeeded or Failed" Mar 12 23:36:39.837: INFO: Trying to get logs from node latest-worker2 pod downward-api-244a62ef-8651-4651-bb21-d01bae0fe2b7 container dapi-container: STEP: delete the pod Mar 12 23:36:39.943: INFO: Waiting for pod downward-api-244a62ef-8651-4651-bb21-d01bae0fe2b7 to disappear Mar 12 23:36:39.951: INFO: Pod downward-api-244a62ef-8651-4651-bb21-d01bae0fe2b7 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:36:39.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8536" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":9,"skipped":189,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:36:39.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 12 23:36:46.119: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 12 23:36:46.127: INFO: Pod pod-with-prestop-http-hook still exists Mar 12 23:36:48.127: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 12 23:36:48.130: INFO: Pod pod-with-prestop-http-hook still exists Mar 12 23:36:50.127: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 12 23:36:50.130: INFO: Pod pod-with-prestop-http-hook still exists Mar 12 23:36:52.127: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 12 23:36:52.130: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:36:52.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9472" for this suite. 
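The pod shape behind a preStop HTTP hook test looks roughly like the following. This is a sketch, not the suite's exact fixture: the real test points the hook at a separate handler pod, while this simplified version targets the container's own port 80. On deletion, the kubelet performs the HTTP GET before sending SIGTERM, which is the sequence the delete-and-poll loop above is waiting out.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook-demo
spec:
  containers:
  - name: app
    image: nginx:1.17
    lifecycle:
      preStop:
        httpGet:
          path: /     # host defaults to the pod's own IP when omitted
          port: 80
EOF
kubectl delete pod pod-with-prestop-http-hook-demo   # hook GET runs before SIGTERM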
• [SLOW TEST:12.192 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":10,"skipped":198,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:36:52.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-2475 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a new StatefulSet Mar 12 23:36:52.240: INFO: Found 0 stateful pods, waiting for 3 Mar 12 23:37:02.250: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 12 23:37:02.250: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 12 23:37:02.250: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Mar 12 23:37:02.258: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2475 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 12 23:37:02.462: INFO: stderr: "I0312 23:37:02.380130 188 log.go:172] (0xc000b71290) (0xc000b46500) Create stream\nI0312 23:37:02.380170 188 log.go:172] (0xc000b71290) (0xc000b46500) Stream added, broadcasting: 1\nI0312 23:37:02.383796 188 log.go:172] (0xc000b71290) Reply frame received for 1\nI0312 23:37:02.383855 188 log.go:172] (0xc000b71290) (0xc000b14280) Create stream\nI0312 23:37:02.383872 188 log.go:172] (0xc000b71290) (0xc000b14280) Stream added, broadcasting: 3\nI0312 23:37:02.385145 188 log.go:172] (0xc000b71290) Reply frame received for 3\nI0312 23:37:02.385179 188 log.go:172] (0xc000b71290) (0xc000b465a0) Create stream\nI0312 23:37:02.385192 188 log.go:172] (0xc000b71290) (0xc000b465a0) Stream added, broadcasting: 5\nI0312 23:37:02.386343 188 log.go:172] (0xc000b71290) Reply frame received for 5\nI0312 23:37:02.440452 188 log.go:172] (0xc000b71290) Data frame received 
for 5\nI0312 23:37:02.440479 188 log.go:172] (0xc000b465a0) (5) Data frame handling\nI0312 23:37:02.440488 188 log.go:172] (0xc000b465a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0312 23:37:02.457621 188 log.go:172] (0xc000b71290) Data frame received for 3\nI0312 23:37:02.457643 188 log.go:172] (0xc000b14280) (3) Data frame handling\nI0312 23:37:02.457662 188 log.go:172] (0xc000b14280) (3) Data frame sent\nI0312 23:37:02.457671 188 log.go:172] (0xc000b71290) Data frame received for 3\nI0312 23:37:02.457679 188 log.go:172] (0xc000b14280) (3) Data frame handling\nI0312 23:37:02.458085 188 log.go:172] (0xc000b71290) Data frame received for 5\nI0312 23:37:02.458103 188 log.go:172] (0xc000b465a0) (5) Data frame handling\nI0312 23:37:02.459424 188 log.go:172] (0xc000b71290) Data frame received for 1\nI0312 23:37:02.459442 188 log.go:172] (0xc000b46500) (1) Data frame handling\nI0312 23:37:02.459449 188 log.go:172] (0xc000b46500) (1) Data frame sent\nI0312 23:37:02.459458 188 log.go:172] (0xc000b71290) (0xc000b46500) Stream removed, broadcasting: 1\nI0312 23:37:02.459475 188 log.go:172] (0xc000b71290) Go away received\nI0312 23:37:02.459723 188 log.go:172] (0xc000b71290) (0xc000b46500) Stream removed, broadcasting: 1\nI0312 23:37:02.459737 188 log.go:172] (0xc000b71290) (0xc000b14280) Stream removed, broadcasting: 3\nI0312 23:37:02.459743 188 log.go:172] (0xc000b71290) (0xc000b465a0) Stream removed, broadcasting: 5\n" Mar 12 23:37:02.462: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 12 23:37:02.462: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Mar 12 23:37:12.498: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Mar 12 23:37:22.545: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2475 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 23:37:22.758: INFO: stderr: "I0312 23:37:22.679423 209 log.go:172] (0xc000a5d760) (0xc000ad2960) Create stream\nI0312 23:37:22.679473 209 log.go:172] (0xc000a5d760) (0xc000ad2960) Stream added, broadcasting: 1\nI0312 23:37:22.683503 209 log.go:172] (0xc000a5d760) Reply frame received for 1\nI0312 23:37:22.683535 209 log.go:172] (0xc000a5d760) (0xc0005817c0) Create stream\nI0312 23:37:22.683543 209 log.go:172] (0xc000a5d760) (0xc0005817c0) Stream added, broadcasting: 3\nI0312 23:37:22.684337 209 log.go:172] (0xc000a5d760) Reply frame received for 3\nI0312 23:37:22.684368 209 log.go:172] (0xc000a5d760) (0xc000442be0) Create stream\nI0312 23:37:22.684381 209 log.go:172] (0xc000a5d760) (0xc000442be0) Stream added, broadcasting: 5\nI0312 23:37:22.685242 209 log.go:172] (0xc000a5d760) Reply frame received for 5\nI0312 23:37:22.753171 209 log.go:172] (0xc000a5d760) Data frame received for 3\nI0312 23:37:22.753204 209 log.go:172] (0xc0005817c0) (3) Data frame handling\nI0312 23:37:22.753219 209 log.go:172] (0xc0005817c0) (3) Data frame sent\nI0312 23:37:22.753324 209 log.go:172] (0xc000a5d760) Data frame received for 5\nI0312 23:37:22.753335 209 log.go:172] (0xc000442be0) (5) Data frame handling\nI0312 23:37:22.753343 209 log.go:172] (0xc000442be0) (5) Data frame sent\nI0312 23:37:22.753355 209 
log.go:172] (0xc000a5d760) Data frame received for 5\nI0312 23:37:22.753361 209 log.go:172] (0xc000442be0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0312 23:37:22.753393 209 log.go:172] (0xc000a5d760) Data frame received for 3\nI0312 23:37:22.753416 209 log.go:172] (0xc0005817c0) (3) Data frame handling\nI0312 23:37:22.754455 209 log.go:172] (0xc000a5d760) Data frame received for 1\nI0312 23:37:22.754481 209 log.go:172] (0xc000ad2960) (1) Data frame handling\nI0312 23:37:22.754500 209 log.go:172] (0xc000ad2960) (1) Data frame sent\nI0312 23:37:22.754511 209 log.go:172] (0xc000a5d760) (0xc000ad2960) Stream removed, broadcasting: 1\nI0312 23:37:22.754533 209 log.go:172] (0xc000a5d760) Go away received\nI0312 23:37:22.754852 209 log.go:172] (0xc000a5d760) (0xc000ad2960) Stream removed, broadcasting: 1\nI0312 23:37:22.754878 209 log.go:172] (0xc000a5d760) (0xc0005817c0) Stream removed, broadcasting: 3\nI0312 23:37:22.754886 209 log.go:172] (0xc000a5d760) (0xc000442be0) Stream removed, broadcasting: 5\n" Mar 12 23:37:22.758: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 12 23:37:22.758: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 12 23:37:32.776: INFO: Waiting for StatefulSet statefulset-2475/ss2 to complete update Mar 12 23:37:32.776: INFO: Waiting for Pod statefulset-2475/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 12 23:37:42.782: INFO: Waiting for StatefulSet statefulset-2475/ss2 to complete update STEP: Rolling back to a previous revision Mar 12 23:37:52.783: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2475 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 12 23:37:53.031: INFO: stderr: "I0312 23:37:52.925928 230 log.go:172] (0xc000ad7340) (0xc000b22820) Create stream\nI0312 23:37:52.925978 230 log.go:172] (0xc000ad7340) (0xc000b22820) Stream added, broadcasting: 1\nI0312 23:37:52.930036 230 log.go:172] (0xc000ad7340) Reply frame received for 1\nI0312 23:37:52.930078 230 log.go:172] (0xc000ad7340) (0xc00068f680) Create stream\nI0312 23:37:52.930086 230 log.go:172] (0xc000ad7340) (0xc00068f680) Stream added, broadcasting: 3\nI0312 23:37:52.930925 230 log.go:172] (0xc000ad7340) Reply frame received for 3\nI0312 23:37:52.930956 230 log.go:172] (0xc000ad7340) (0xc000538aa0) Create stream\nI0312 23:37:52.930969 230 log.go:172] (0xc000ad7340) (0xc000538aa0) Stream added, broadcasting: 5\nI0312 23:37:52.931769 230 log.go:172] (0xc000ad7340) Reply frame received for 5\nI0312 23:37:53.009572 230 log.go:172] (0xc000ad7340) Data frame received for 5\nI0312 23:37:53.009597 230 log.go:172] (0xc000538aa0) (5) Data frame handling\nI0312 23:37:53.009612 230 log.go:172] (0xc000538aa0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0312 23:37:53.026194 230 log.go:172] (0xc000ad7340) Data frame received for 5\nI0312 23:37:53.026211 230 log.go:172] (0xc000538aa0) (5) Data frame handling\nI0312 23:37:53.026236 230 log.go:172] (0xc000ad7340) Data frame received for 3\nI0312 23:37:53.026250 230 log.go:172] (0xc00068f680) (3) Data frame handling\nI0312 23:37:53.026267 230 log.go:172] (0xc00068f680) (3) Data frame sent\nI0312 23:37:53.026281 230 log.go:172] (0xc000ad7340) Data frame received for 3\nI0312 23:37:53.026294 230 log.go:172] (0xc00068f680) (3) Data 
frame handling\nI0312 23:37:53.027920 230 log.go:172] (0xc000ad7340) Data frame received for 1\nI0312 23:37:53.027940 230 log.go:172] (0xc000b22820) (1) Data frame handling\nI0312 23:37:53.027960 230 log.go:172] (0xc000b22820) (1) Data frame sent\nI0312 23:37:53.027976 230 log.go:172] (0xc000ad7340) (0xc000b22820) Stream removed, broadcasting: 1\nI0312 23:37:53.027998 230 log.go:172] (0xc000ad7340) Go away received\nI0312 23:37:53.028324 230 log.go:172] (0xc000ad7340) (0xc000b22820) Stream removed, broadcasting: 1\nI0312 23:37:53.028343 230 log.go:172] (0xc000ad7340) (0xc00068f680) Stream removed, broadcasting: 3\nI0312 23:37:53.028352 230 log.go:172] (0xc000ad7340) (0xc000538aa0) Stream removed, broadcasting: 5\n" Mar 12 23:37:53.031: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 12 23:37:53.032: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 12 23:38:03.064: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Mar 12 23:38:13.096: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2475 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 23:38:13.299: INFO: stderr: "I0312 23:38:13.233390 252 log.go:172] (0xc000a35080) (0xc0009528c0) Create stream\nI0312 23:38:13.233432 252 log.go:172] (0xc000a35080) (0xc0009528c0) Stream added, broadcasting: 1\nI0312 23:38:13.237284 252 log.go:172] (0xc000a35080) Reply frame received for 1\nI0312 23:38:13.237311 252 log.go:172] (0xc000a35080) (0xc0007d9540) Create stream\nI0312 23:38:13.237317 252 log.go:172] (0xc000a35080) (0xc0007d9540) Stream added, broadcasting: 3\nI0312 23:38:13.238593 252 log.go:172] (0xc000a35080) Reply frame received for 3\nI0312 23:38:13.238645 252 log.go:172] (0xc000a35080) (0xc000608960) Create stream\nI0312 23:38:13.238657 252 log.go:172] (0xc000a35080) (0xc000608960) Stream added, broadcasting: 5\nI0312 23:38:13.241119 252 log.go:172] (0xc000a35080) Reply frame received for 5\nI0312 23:38:13.295265 252 log.go:172] (0xc000a35080) Data frame received for 3\nI0312 23:38:13.295296 252 log.go:172] (0xc000a35080) Data frame received for 5\nI0312 23:38:13.295314 252 log.go:172] (0xc000608960) (5) Data frame handling\nI0312 23:38:13.295325 252 log.go:172] (0xc000608960) (5) Data frame sent\nI0312 23:38:13.295331 252 log.go:172] (0xc000a35080) Data frame received for 5\nI0312 23:38:13.295338 252 log.go:172] (0xc000608960) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0312 23:38:13.295355 252 log.go:172] (0xc0007d9540) (3) Data frame handling\nI0312 23:38:13.295361 252 log.go:172] (0xc0007d9540) (3) Data frame sent\nI0312 23:38:13.295365 252 log.go:172] (0xc000a35080) Data frame received for 3\nI0312 23:38:13.295370 252 log.go:172] (0xc0007d9540) (3) Data frame handling\nI0312 23:38:13.296628 252 log.go:172] (0xc000a35080) Data frame received for 1\nI0312 23:38:13.296642 252 log.go:172] (0xc0009528c0) (1) Data frame handling\nI0312 23:38:13.296648 252 log.go:172] (0xc0009528c0) (1) Data frame sent\nI0312 23:38:13.296656 252 log.go:172] (0xc000a35080) (0xc0009528c0) Stream removed, broadcasting: 1\nI0312 23:38:13.296667 252 log.go:172] (0xc000a35080) Go away received\nI0312 23:38:13.296989 252 log.go:172] (0xc000a35080) (0xc0009528c0) Stream removed, broadcasting: 1\nI0312 23:38:13.297008 252 log.go:172] (0xc000a35080) 
(0xc0007d9540) Stream removed, broadcasting: 3\nI0312 23:38:13.297015 252 log.go:172] (0xc000a35080) (0xc000608960) Stream removed, broadcasting: 5\n" Mar 12 23:38:13.299: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 12 23:38:13.299: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 12 23:38:23.315: INFO: Waiting for StatefulSet statefulset-2475/ss2 to complete update Mar 12 23:38:23.315: INFO: Waiting for Pod statefulset-2475/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Mar 12 23:38:23.315: INFO: Waiting for Pod statefulset-2475/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Mar 12 23:38:33.321: INFO: Waiting for StatefulSet statefulset-2475/ss2 to complete update Mar 12 23:38:33.321: INFO: Waiting for Pod statefulset-2475/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Mar 12 23:38:43.320: INFO: Waiting for StatefulSet statefulset-2475/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 12 23:38:53.321: INFO: Deleting all statefulset in ns statefulset-2475 Mar 12 23:38:53.322: INFO: Scaling statefulset ss2 to 0 Mar 12 23:39:23.341: INFO: Waiting for statefulset status.replicas updated to 0 Mar 12 23:39:23.344: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:39:23.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2475" for this suite. 
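The update-and-rollback flow above maps onto ordinary client operations. A sketch, assuming a StatefulSet named web in namespace demo with a container named httpd (all names hypothetical); with the default RollingUpdate strategy the controller replaces pods in reverse ordinal order, exactly as the log shows:

kubectl -n demo set image statefulset/web httpd=docker.io/library/httpd:2.4.39-alpine
kubectl -n demo rollout status statefulset/web
# Roll back to the previous revision (equivalent to re-applying the old template):
kubectl -n demo rollout undo statefulset/web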
• [SLOW TEST:151.213 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":11,"skipped":257,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:39:23.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 12 23:39:23.448: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a8d4f1d8-14d3-4237-a532-b9130ab82edc" in namespace "downward-api-7529" to be "Succeeded or Failed" Mar 12 23:39:23.463: INFO: Pod "downwardapi-volume-a8d4f1d8-14d3-4237-a532-b9130ab82edc": Phase="Pending", Reason="", readiness=false. Elapsed: 14.786688ms Mar 12 23:39:25.466: INFO: Pod "downwardapi-volume-a8d4f1d8-14d3-4237-a532-b9130ab82edc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.018402383s STEP: Saw pod success Mar 12 23:39:25.466: INFO: Pod "downwardapi-volume-a8d4f1d8-14d3-4237-a532-b9130ab82edc" satisfied condition "Succeeded or Failed" Mar 12 23:39:25.470: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-a8d4f1d8-14d3-4237-a532-b9130ab82edc container client-container: STEP: delete the pod Mar 12 23:39:25.518: INFO: Waiting for pod downwardapi-volume-a8d4f1d8-14d3-4237-a532-b9130ab82edc to disappear Mar 12 23:39:25.524: INFO: Pod downwardapi-volume-a8d4f1d8-14d3-4237-a532-b9130ab82edc no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:39:25.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7529" for this suite. 
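The defaulting asserted above comes from resourceFieldRef: when the container declares no memory limit, the value published into the downward API volume falls back to the node's allocatable memory. A minimal sketch (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ['sh', '-c', 'cat /etc/podinfo/memory_limit']
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory   # no limit set, so this defaults to node allocatable
EOF
kubectl logs dapi-volume-demo   # prints the node's allocatable memory in bytes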
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":12,"skipped":268,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:39:25.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3909.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3909.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 12 23:39:29.682: INFO: DNS probes using dns-3909/dns-test-8157071b-db80-47e6-855d-a59d7b741496 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:39:29.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3909" for this suite. 
•{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":275,"completed":13,"skipped":292,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:39:29.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Mar 12 23:39:29.831: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:39:42.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9761" for this suite. • [SLOW TEST:12.731 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":14,"skipped":336,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:39:42.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-16c4a896-3487-4d34-bd0a-ec010f9cfdcc STEP: Creating a pod to test consume secrets Mar 12 23:39:42.687: INFO: Waiting up to 5m0s for pod "pod-secrets-5ffee55a-42bc-4343-b1cd-c53ae8716a62" in namespace "secrets-1252" to be "Succeeded or Failed" Mar 12 23:39:42.704: INFO: Pod "pod-secrets-5ffee55a-42bc-4343-b1cd-c53ae8716a62": Phase="Pending", Reason="", readiness=false. 
Elapsed: 17.264086ms Mar 12 23:39:44.708: INFO: Pod "pod-secrets-5ffee55a-42bc-4343-b1cd-c53ae8716a62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.020505471s STEP: Saw pod success Mar 12 23:39:44.708: INFO: Pod "pod-secrets-5ffee55a-42bc-4343-b1cd-c53ae8716a62" satisfied condition "Succeeded or Failed" Mar 12 23:39:44.711: INFO: Trying to get logs from node latest-worker pod pod-secrets-5ffee55a-42bc-4343-b1cd-c53ae8716a62 container secret-volume-test: STEP: delete the pod Mar 12 23:39:44.731: INFO: Waiting for pod pod-secrets-5ffee55a-42bc-4343-b1cd-c53ae8716a62 to disappear Mar 12 23:39:44.734: INFO: Pod pod-secrets-5ffee55a-42bc-4343-b1cd-c53ae8716a62 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:39:44.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1252" for this suite. STEP: Destroying namespace "secret-namespace-7453" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":15,"skipped":345,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:39:44.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 12 23:39:44.821: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-3536' Mar 12 23:39:44.906: INFO: stderr: "" Mar 12 23:39:44.906: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Mar 12 23:39:49.956: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-3536 -o json' Mar 12 23:39:50.068: INFO: stderr: "" Mar 12 23:39:50.068: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-03-12T23:39:44Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-3536\",\n \"resourceVersion\": \"1208596\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-3536/pods/e2e-test-httpd-pod\",\n \"uid\": \"f649445f-7c18-4a42-9005-cb84f721feb9\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": 
\"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-q8m97\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-q8m97\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-q8m97\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-12T23:39:45Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-12T23:39:46Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-12T23:39:46Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-12T23:39:44Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://f4ff5dddbc2ee45236064abe99e10f43f8a0620ce6bfb187cdc76ef5caf76f12\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-03-12T23:39:46Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.16\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.89\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.89\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-03-12T23:39:45Z\"\n }\n}\n" STEP: replace the image in the pod Mar 12 23:39:50.068: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-3536' Mar 12 23:39:50.262: INFO: stderr: "" Mar 12 23:39:50.262: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Mar 12 23:39:50.267: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3536' Mar 12 23:39:52.515: INFO: stderr: "" Mar 12 23:39:52.515: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:39:52.515: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3536" for this suite. • [SLOW TEST:7.776 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":275,"completed":16,"skipped":348,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:39:52.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0312 23:40:02.735251 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 12 23:40:02.735: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:40:02.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6699" for this suite. 
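The garbage-collector spec above builds two replication controllers, adds the surviving RC as a second owner on half of the doomed RC's pods, deletes the first owner, and verifies that pods with a remaining valid owner are not collected. A minimal command-line sketch of the same behavior, with illustrative names; note the spelled-out --cascade=foreground form shown here needs kubectl v1.20+, while the v1.17-era client in this run set deletion options through the API directly:

    # Foreground deletion: block until dependents with no other owners are gone
    kubectl delete rc rc-to-be-deleted --cascade=foreground -n demo-ns
    # Pods that also list rc-to-stay as an owner survive; inspect remaining owners
    kubectl get pods -n demo-ns -o jsonpath='{range .items[*]}{.metadata.name}{" -> "}{.metadata.ownerReferences[*].name}{"\n"}{end}'
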
• [SLOW TEST:10.219 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":17,"skipped":350,"failed":0} SSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:40:02.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 12 23:40:02.788: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:40:04.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2703" for this suite. 
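The spec above reads container logs over a websocket instead of a plain HTTP stream; both paths go through the same kubelet-backed log subresource that kubectl logs uses. A hedged sketch of hitting that subresource directly through the API server proxy (the pod name is illustrative):

    kubectl proxy --port=8001 &
    curl 'http://127.0.0.1:8001/api/v1/namespaces/pods-2703/pods/my-pod/log?follow=true'

With a websocket-capable client the same endpoint can be upgraded, which is what the test framework does here.
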
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":18,"skipped":356,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:40:04.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Mar 12 23:40:06.955: INFO: &Pod{ObjectMeta:{send-events-bb7d3bd3-0f91-4d29-8fa4-6c244da5649a events-7021 /api/v1/namespaces/events-7021/pods/send-events-bb7d3bd3-0f91-4d29-8fa4-6c244da5649a 5fbaa80f-f8b9-4763-ba3c-4d3d8550f81b 1208870 0 2020-03-12 23:40:04 +0000 UTC map[name:foo time:929768449] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mvkfp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mvkfp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mvkfp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kuberne
tes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 23:40:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 23:40:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 23:40:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 23:40:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.2.215,StartTime:2020-03-12 23:40:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-12 23:40:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://2cad2e1b0356d8c6d326e354ac6ba591f2f0d1f926e4d6d2ab0f744252ad0bba,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.215,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Mar 12 23:40:08.959: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Mar 12 23:40:10.964: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:40:10.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-7021" for this suite. 
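The events spec above waits until it has seen one event from the scheduler (Scheduled) and one from the kubelet (e.g. Pulled/Created/Started) for its pod. The same check by hand, filtering on the involved object with a field selector and using the pod and namespace from this run:

    kubectl get events -n events-7021 \
      --field-selector involvedObject.name=send-events-bb7d3bd3-0f91-4d29-8fa4-6c244da5649a

Adding -o wide includes the source column, which distinguishes scheduler events from kubelet events.
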
• [SLOW TEST:6.145 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":275,"completed":19,"skipped":390,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:40:10.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod test-webserver-e93f121a-2930-405a-99f4-5562d5348646 in namespace container-probe-9789 Mar 12 23:40:13.068: INFO: Started pod test-webserver-e93f121a-2930-405a-99f4-5562d5348646 in namespace container-probe-9789 STEP: checking the pod's current state and verifying that restartCount is present Mar 12 23:40:13.072: INFO: Initial restart count of pod test-webserver-e93f121a-2930-405a-99f4-5562d5348646 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:44:13.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9789" for this suite. 
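The probe spec above starts a webserver with an HTTP liveness probe that keeps succeeding, then watches restartCount for four minutes to confirm the kubelet never restarts the container. A minimal sketch of such a probe; the image, path, and thresholds here are illustrative rather than the fixture's own:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-demo
    spec:
      containers:
      - name: webserver
        image: nginx:1.17            # illustrative; any server answering on port 80 works
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
          failureThreshold: 3
    EOF
    kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'

As long as the probe keeps passing, the restartCount read by the last command stays 0.
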
• [SLOW TEST:242.622 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":20,"skipped":425,"failed":0} SSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:44:13.606: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating pod Mar 12 23:44:17.694: INFO: Pod pod-hostip-7fde0035-5166-4375-afa1-476e58e5e069 has hostIP: 172.17.0.16 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:44:17.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7649" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":21,"skipped":428,"failed":0} SSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:44:17.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap that has name configmap-test-emptyKey-e13c5653-5215-447f-a346-68f8fc7c51ce [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:44:17.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6794" for this suite. 
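The ConfigMap spec above only needs to attempt the create and observe the rejection: keys in data must be non-empty and valid, so an empty key fails API validation before anything is stored. Reproducing the failure by hand (the resource name is illustrative):

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: bad-configmap
    data:
      "": "value"                    # empty key is rejected by validation
    EOF

Expect an Invalid error from the API server rather than a created object.
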
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":22,"skipped":431,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:44:17.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 12 23:44:17.826: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4f4f3064-66d5-4638-b627-d117b9cef636" in namespace "projected-5403" to be "Succeeded or Failed" Mar 12 23:44:17.831: INFO: Pod "downwardapi-volume-4f4f3064-66d5-4638-b627-d117b9cef636": Phase="Pending", Reason="", readiness=false. Elapsed: 5.121527ms Mar 12 23:44:19.835: INFO: Pod "downwardapi-volume-4f4f3064-66d5-4638-b627-d117b9cef636": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00934816s STEP: Saw pod success Mar 12 23:44:19.835: INFO: Pod "downwardapi-volume-4f4f3064-66d5-4638-b627-d117b9cef636" satisfied condition "Succeeded or Failed" Mar 12 23:44:19.838: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-4f4f3064-66d5-4638-b627-d117b9cef636 container client-container: STEP: delete the pod Mar 12 23:44:19.912: INFO: Waiting for pod downwardapi-volume-4f4f3064-66d5-4638-b627-d117b9cef636 to disappear Mar 12 23:44:19.944: INFO: Pod downwardapi-volume-4f4f3064-66d5-4638-b627-d117b9cef636 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:44:19.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5403" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":23,"skipped":434,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:44:19.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-2737c147-b2bc-4973-be31-1d8d8751343f STEP: Creating a pod to test consume configMaps Mar 12 23:44:20.022: INFO: Waiting up to 5m0s for pod "pod-configmaps-26710d1d-0b56-4772-b932-c5e870dce8ef" in namespace "configmap-2647" to be "Succeeded or Failed" Mar 12 23:44:20.039: INFO: Pod "pod-configmaps-26710d1d-0b56-4772-b932-c5e870dce8ef": Phase="Pending", Reason="", readiness=false. Elapsed: 16.9418ms Mar 12 23:44:22.043: INFO: Pod "pod-configmaps-26710d1d-0b56-4772-b932-c5e870dce8ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.020582045s STEP: Saw pod success Mar 12 23:44:22.043: INFO: Pod "pod-configmaps-26710d1d-0b56-4772-b932-c5e870dce8ef" satisfied condition "Succeeded or Failed" Mar 12 23:44:22.046: INFO: Trying to get logs from node latest-worker pod pod-configmaps-26710d1d-0b56-4772-b932-c5e870dce8ef container configmap-volume-test: STEP: delete the pod Mar 12 23:44:22.089: INFO: Waiting for pod pod-configmaps-26710d1d-0b56-4772-b932-c5e870dce8ef to disappear Mar 12 23:44:22.098: INFO: Pod pod-configmaps-26710d1d-0b56-4772-b932-c5e870dce8ef no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:44:22.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2647" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":24,"skipped":435,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:44:22.125: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288 STEP: creating an pod Mar 12 23:44:22.161: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-308 -- logs-generator --log-lines-total 100 --run-duration 20s' Mar 12 23:44:22.259: INFO: stderr: "" Mar 12 23:44:22.259: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Waiting for log generator to start. Mar 12 23:44:22.259: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Mar 12 23:44:22.259: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-308" to be "running and ready, or succeeded" Mar 12 23:44:22.301: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 41.708058ms Mar 12 23:44:24.304: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 2.044653364s Mar 12 23:44:24.304: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Mar 12 23:44:24.304: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for matching strings Mar 12 23:44:24.304: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-308' Mar 12 23:44:24.407: INFO: stderr: "" Mar 12 23:44:24.407: INFO: stdout: "I0312 23:44:23.400167 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/vsg9 473\nI0312 23:44:23.600272 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/csj 393\nI0312 23:44:23.800336 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/2zzs 252\nI0312 23:44:24.000350 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/p6p 559\nI0312 23:44:24.200411 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/n56 415\nI0312 23:44:24.400329 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/rwt 542\n" STEP: limiting log lines Mar 12 23:44:24.407: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-308 --tail=1' Mar 12 23:44:24.497: INFO: stderr: "" Mar 12 23:44:24.497: INFO: stdout: "I0312 23:44:24.400329 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/rwt 542\n" Mar 12 23:44:24.497: INFO: got output "I0312 23:44:24.400329 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/rwt 542\n" STEP: limiting log bytes Mar 12 23:44:24.497: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-308 --limit-bytes=1' Mar 12 23:44:24.577: INFO: stderr: "" Mar 12 23:44:24.577: INFO: stdout: "I" Mar 12 23:44:24.577: INFO: got output "I" STEP: exposing timestamps Mar 12 23:44:24.577: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-308 --tail=1 --timestamps' Mar 12 23:44:24.649: INFO: stderr: "" Mar 12 23:44:24.649: INFO: stdout: "2020-03-12T23:44:24.600415513Z I0312 23:44:24.600308 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/rm5 505\n" Mar 12 23:44:24.649: INFO: got output "2020-03-12T23:44:24.600415513Z I0312 23:44:24.600308 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/rm5 505\n" STEP: restricting to a time range Mar 12 23:44:27.150: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-308 --since=1s' Mar 12 23:44:27.283: INFO: stderr: "" Mar 12 23:44:27.283: INFO: stdout: "I0312 23:44:26.400323 1 logs_generator.go:76] 15 GET /api/v1/namespaces/kube-system/pods/gztv 557\nI0312 23:44:26.600367 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/5j5 331\nI0312 23:44:26.800362 1 logs_generator.go:76] 17 POST /api/v1/namespaces/default/pods/rtq 594\nI0312 23:44:27.000384 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/4cbt 248\nI0312 23:44:27.200415 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/slm 567\n" Mar 12 23:44:27.284: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-308 --since=24h' Mar 12 23:44:27.377: INFO: stderr: "" Mar 12 23:44:27.377: INFO: stdout: "I0312 23:44:23.400167 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/vsg9 473\nI0312 23:44:23.600272 1 
logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/csj 393\nI0312 23:44:23.800336 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/2zzs 252\nI0312 23:44:24.000350 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/p6p 559\nI0312 23:44:24.200411 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/n56 415\nI0312 23:44:24.400329 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/rwt 542\nI0312 23:44:24.600308 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/rm5 505\nI0312 23:44:24.800319 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/vbzm 208\nI0312 23:44:25.000304 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/wf84 390\nI0312 23:44:25.200329 1 logs_generator.go:76] 9 GET /api/v1/namespaces/default/pods/l5vw 540\nI0312 23:44:25.400353 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/cj7 477\nI0312 23:44:25.600353 1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/pjpg 228\nI0312 23:44:25.800338 1 logs_generator.go:76] 12 POST /api/v1/namespaces/kube-system/pods/rq4 368\nI0312 23:44:26.000372 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/rdz 528\nI0312 23:44:26.200361 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/ns/pods/jbcg 450\nI0312 23:44:26.400323 1 logs_generator.go:76] 15 GET /api/v1/namespaces/kube-system/pods/gztv 557\nI0312 23:44:26.600367 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/5j5 331\nI0312 23:44:26.800362 1 logs_generator.go:76] 17 POST /api/v1/namespaces/default/pods/rtq 594\nI0312 23:44:27.000384 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/4cbt 248\nI0312 23:44:27.200415 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/slm 567\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294 Mar 12 23:44:27.377: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-308' Mar 12 23:44:28.962: INFO: stderr: "" Mar 12 23:44:28.962: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:44:28.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-308" for this suite. 
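The Kubectl logs spec above runs each of the main filtering flags once against the generator pod; the exact invocations appear verbatim in the log records above, and the flags compose freely:

    kubectl logs logs-generator -n kubectl-308                       # everything so far
    kubectl logs logs-generator -n kubectl-308 --tail=1              # last line only
    kubectl logs logs-generator -n kubectl-308 --limit-bytes=1       # first byte only
    kubectl logs logs-generator -n kubectl-308 --tail=1 --timestamps # prepend RFC3339 timestamps
    kubectl logs logs-generator -n kubectl-308 --since=1s            # bounded time window
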
• [SLOW TEST:6.844 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":275,"completed":25,"skipped":441,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:44:28.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Mar 12 23:44:29.011: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the sample API server. Mar 12 23:44:29.795: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Mar 12 23:44:31.872: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719653469, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719653469, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719653469, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719653469, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 12 23:44:34.482: INFO: Waited 605.317964ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:44:34.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-2747" for this suite. 
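The aggregator spec above deploys the sample API server and wires it into the aggregation layer; the wiring itself is an APIService object pointing at a Service in front of the deployment. A hedged sketch, assuming the wardle.example.com/v1alpha1 group the upstream sample-apiserver conventionally serves and an illustrative Service name:

    kubectl apply -f - <<EOF
    apiVersion: apiregistration.k8s.io/v1
    kind: APIService
    metadata:
      name: v1alpha1.wardle.example.com
    spec:
      group: wardle.example.com
      version: v1alpha1
      service:
        name: sample-api             # illustrative Service fronting the deployment
        namespace: aggregator-2747
      insecureSkipTLSVerify: true    # the e2e fixture trusts a CA bundle instead
      groupPriorityMinimum: 2000
      versionPriority: 200
    EOF
    kubectl get apiservices v1alpha1.wardle.example.com
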
• [SLOW TEST:6.064 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":26,"skipped":455,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:44:35.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-9792 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating stateful set ss in namespace statefulset-9792 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9792 Mar 12 23:44:35.086: INFO: Found 0 stateful pods, waiting for 1 Mar 12 23:44:45.106: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Mar 12 23:44:45.110: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9792 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 12 23:44:45.357: INFO: stderr: "I0312 23:44:45.257790 521 log.go:172] (0xc00003a6e0) (0xc0006615e0) Create stream\nI0312 23:44:45.257857 521 log.go:172] (0xc00003a6e0) (0xc0006615e0) Stream added, broadcasting: 1\nI0312 23:44:45.260615 521 log.go:172] (0xc00003a6e0) Reply frame received for 1\nI0312 23:44:45.260642 521 log.go:172] (0xc00003a6e0) (0xc0005b5680) Create stream\nI0312 23:44:45.260649 521 log.go:172] (0xc00003a6e0) (0xc0005b5680) Stream added, broadcasting: 3\nI0312 23:44:45.261514 521 log.go:172] (0xc00003a6e0) Reply frame received for 3\nI0312 23:44:45.261552 521 log.go:172] (0xc00003a6e0) (0xc000661680) Create stream\nI0312 23:44:45.261564 521 log.go:172] (0xc00003a6e0) (0xc000661680) Stream added, broadcasting: 5\nI0312 23:44:45.262536 521 log.go:172] (0xc00003a6e0) Reply frame received for 5\nI0312 23:44:45.327177 521 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0312 23:44:45.327196 521 log.go:172] (0xc000661680) (5) Data frame handling\nI0312 23:44:45.327207 521 log.go:172] (0xc000661680) (5) Data frame sent\n+ mv -v 
/usr/local/apache2/htdocs/index.html /tmp/\nI0312 23:44:45.351939 521 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0312 23:44:45.351976 521 log.go:172] (0xc000661680) (5) Data frame handling\nI0312 23:44:45.352002 521 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0312 23:44:45.352047 521 log.go:172] (0xc0005b5680) (3) Data frame handling\nI0312 23:44:45.352065 521 log.go:172] (0xc0005b5680) (3) Data frame sent\nI0312 23:44:45.352072 521 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0312 23:44:45.352078 521 log.go:172] (0xc0005b5680) (3) Data frame handling\nI0312 23:44:45.353741 521 log.go:172] (0xc00003a6e0) Data frame received for 1\nI0312 23:44:45.353765 521 log.go:172] (0xc0006615e0) (1) Data frame handling\nI0312 23:44:45.353788 521 log.go:172] (0xc0006615e0) (1) Data frame sent\nI0312 23:44:45.353806 521 log.go:172] (0xc00003a6e0) (0xc0006615e0) Stream removed, broadcasting: 1\nI0312 23:44:45.353830 521 log.go:172] (0xc00003a6e0) Go away received\nI0312 23:44:45.354266 521 log.go:172] (0xc00003a6e0) (0xc0006615e0) Stream removed, broadcasting: 1\nI0312 23:44:45.354286 521 log.go:172] (0xc00003a6e0) (0xc0005b5680) Stream removed, broadcasting: 3\nI0312 23:44:45.354296 521 log.go:172] (0xc00003a6e0) (0xc000661680) Stream removed, broadcasting: 5\n" Mar 12 23:44:45.357: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 12 23:44:45.357: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 12 23:44:45.360: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 12 23:44:55.364: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 12 23:44:55.364: INFO: Waiting for statefulset status.replicas updated to 0 Mar 12 23:44:55.384: INFO: POD NODE PHASE GRACE CONDITIONS Mar 12 23:44:55.384: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:35 +0000 UTC }] Mar 12 23:44:55.384: INFO: Mar 12 23:44:55.384: INFO: StatefulSet ss has not reached scale 3, at 1 Mar 12 23:44:56.388: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.987669235s Mar 12 23:44:57.392: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.984261796s Mar 12 23:44:58.395: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.98024802s Mar 12 23:44:59.399: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.976693445s Mar 12 23:45:00.403: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.972749706s Mar 12 23:45:01.407: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.969188675s Mar 12 23:45:02.418: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.964972997s Mar 12 23:45:03.422: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.954289541s Mar 12 23:45:04.426: INFO: Verifying statefulset ss doesn't scale past 3 for another 950.292309ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9792 Mar 12 23:45:05.431: 
INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9792 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 23:45:05.629: INFO: stderr: "I0312 23:45:05.563865 543 log.go:172] (0xc0003c7d90) (0xc00069d680) Create stream\nI0312 23:45:05.563911 543 log.go:172] (0xc0003c7d90) (0xc00069d680) Stream added, broadcasting: 1\nI0312 23:45:05.566172 543 log.go:172] (0xc0003c7d90) Reply frame received for 1\nI0312 23:45:05.566205 543 log.go:172] (0xc0003c7d90) (0xc0003d0b40) Create stream\nI0312 23:45:05.566214 543 log.go:172] (0xc0003c7d90) (0xc0003d0b40) Stream added, broadcasting: 3\nI0312 23:45:05.567045 543 log.go:172] (0xc0003c7d90) Reply frame received for 3\nI0312 23:45:05.567078 543 log.go:172] (0xc0003c7d90) (0xc00069d720) Create stream\nI0312 23:45:05.567088 543 log.go:172] (0xc0003c7d90) (0xc00069d720) Stream added, broadcasting: 5\nI0312 23:45:05.567976 543 log.go:172] (0xc0003c7d90) Reply frame received for 5\nI0312 23:45:05.624843 543 log.go:172] (0xc0003c7d90) Data frame received for 3\nI0312 23:45:05.624885 543 log.go:172] (0xc0003d0b40) (3) Data frame handling\nI0312 23:45:05.624899 543 log.go:172] (0xc0003d0b40) (3) Data frame sent\nI0312 23:45:05.624909 543 log.go:172] (0xc0003c7d90) Data frame received for 3\nI0312 23:45:05.624918 543 log.go:172] (0xc0003d0b40) (3) Data frame handling\nI0312 23:45:05.624949 543 log.go:172] (0xc0003c7d90) Data frame received for 5\nI0312 23:45:05.624957 543 log.go:172] (0xc00069d720) (5) Data frame handling\nI0312 23:45:05.624976 543 log.go:172] (0xc00069d720) (5) Data frame sent\nI0312 23:45:05.624986 543 log.go:172] (0xc0003c7d90) Data frame received for 5\nI0312 23:45:05.625003 543 log.go:172] (0xc00069d720) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0312 23:45:05.626050 543 log.go:172] (0xc0003c7d90) Data frame received for 1\nI0312 23:45:05.626068 543 log.go:172] (0xc00069d680) (1) Data frame handling\nI0312 23:45:05.626077 543 log.go:172] (0xc00069d680) (1) Data frame sent\nI0312 23:45:05.626088 543 log.go:172] (0xc0003c7d90) (0xc00069d680) Stream removed, broadcasting: 1\nI0312 23:45:05.626384 543 log.go:172] (0xc0003c7d90) (0xc00069d680) Stream removed, broadcasting: 1\nI0312 23:45:05.626400 543 log.go:172] (0xc0003c7d90) (0xc0003d0b40) Stream removed, broadcasting: 3\nI0312 23:45:05.626406 543 log.go:172] (0xc0003c7d90) (0xc00069d720) Stream removed, broadcasting: 5\n" Mar 12 23:45:05.629: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 12 23:45:05.629: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 12 23:45:05.629: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9792 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 23:45:05.821: INFO: stderr: "I0312 23:45:05.748542 564 log.go:172] (0xc0008366e0) (0xc0007ee000) Create stream\nI0312 23:45:05.748582 564 log.go:172] (0xc0008366e0) (0xc0007ee000) Stream added, broadcasting: 1\nI0312 23:45:05.750786 564 log.go:172] (0xc0008366e0) Reply frame received for 1\nI0312 23:45:05.750812 564 log.go:172] (0xc0008366e0) (0xc0006e1360) Create stream\nI0312 23:45:05.750821 564 log.go:172] (0xc0008366e0) (0xc0006e1360) Stream added, broadcasting: 3\nI0312 23:45:05.751646 564 log.go:172] (0xc0008366e0) 
Reply frame received for 3\nI0312 23:45:05.751689 564 log.go:172] (0xc0008366e0) (0xc0006e1540) Create stream\nI0312 23:45:05.751696 564 log.go:172] (0xc0008366e0) (0xc0006e1540) Stream added, broadcasting: 5\nI0312 23:45:05.752416 564 log.go:172] (0xc0008366e0) Reply frame received for 5\nI0312 23:45:05.816285 564 log.go:172] (0xc0008366e0) Data frame received for 5\nI0312 23:45:05.816318 564 log.go:172] (0xc0006e1540) (5) Data frame handling\nI0312 23:45:05.816331 564 log.go:172] (0xc0006e1540) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0312 23:45:05.816348 564 log.go:172] (0xc0008366e0) Data frame received for 3\nI0312 23:45:05.816356 564 log.go:172] (0xc0006e1360) (3) Data frame handling\nI0312 23:45:05.816364 564 log.go:172] (0xc0006e1360) (3) Data frame sent\nI0312 23:45:05.816381 564 log.go:172] (0xc0008366e0) Data frame received for 3\nI0312 23:45:05.816388 564 log.go:172] (0xc0006e1360) (3) Data frame handling\nI0312 23:45:05.816449 564 log.go:172] (0xc0008366e0) Data frame received for 5\nI0312 23:45:05.816462 564 log.go:172] (0xc0006e1540) (5) Data frame handling\nI0312 23:45:05.817698 564 log.go:172] (0xc0008366e0) Data frame received for 1\nI0312 23:45:05.817718 564 log.go:172] (0xc0007ee000) (1) Data frame handling\nI0312 23:45:05.817727 564 log.go:172] (0xc0007ee000) (1) Data frame sent\nI0312 23:45:05.817736 564 log.go:172] (0xc0008366e0) (0xc0007ee000) Stream removed, broadcasting: 1\nI0312 23:45:05.817888 564 log.go:172] (0xc0008366e0) Go away received\nI0312 23:45:05.818006 564 log.go:172] (0xc0008366e0) (0xc0007ee000) Stream removed, broadcasting: 1\nI0312 23:45:05.818022 564 log.go:172] (0xc0008366e0) (0xc0006e1360) Stream removed, broadcasting: 3\nI0312 23:45:05.818028 564 log.go:172] (0xc0008366e0) (0xc0006e1540) Stream removed, broadcasting: 5\n" Mar 12 23:45:05.821: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 12 23:45:05.821: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 12 23:45:05.821: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9792 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 23:45:05.971: INFO: stderr: "I0312 23:45:05.919982 584 log.go:172] (0xc000ba9290) (0xc0008f4a00) Create stream\nI0312 23:45:05.920014 584 log.go:172] (0xc000ba9290) (0xc0008f4a00) Stream added, broadcasting: 1\nI0312 23:45:05.922742 584 log.go:172] (0xc000ba9290) Reply frame received for 1\nI0312 23:45:05.922767 584 log.go:172] (0xc000ba9290) (0xc0007dd720) Create stream\nI0312 23:45:05.922773 584 log.go:172] (0xc000ba9290) (0xc0007dd720) Stream added, broadcasting: 3\nI0312 23:45:05.923345 584 log.go:172] (0xc000ba9290) Reply frame received for 3\nI0312 23:45:05.923365 584 log.go:172] (0xc000ba9290) (0xc000546b40) Create stream\nI0312 23:45:05.923371 584 log.go:172] (0xc000ba9290) (0xc000546b40) Stream added, broadcasting: 5\nI0312 23:45:05.923930 584 log.go:172] (0xc000ba9290) Reply frame received for 5\nI0312 23:45:05.967221 584 log.go:172] (0xc000ba9290) Data frame received for 3\nI0312 23:45:05.967256 584 log.go:172] (0xc0007dd720) (3) Data frame handling\nI0312 23:45:05.967265 584 log.go:172] (0xc0007dd720) (3) Data frame sent\nI0312 23:45:05.967271 584 log.go:172] (0xc000ba9290) Data frame received for 3\nI0312 
23:45:05.967278 584 log.go:172] (0xc0007dd720) (3) Data frame handling\nI0312 23:45:05.967287 584 log.go:172] (0xc000ba9290) Data frame received for 5\nI0312 23:45:05.967293 584 log.go:172] (0xc000546b40) (5) Data frame handling\nI0312 23:45:05.967300 584 log.go:172] (0xc000546b40) (5) Data frame sent\nI0312 23:45:05.967306 584 log.go:172] (0xc000ba9290) Data frame received for 5\nI0312 23:45:05.967312 584 log.go:172] (0xc000546b40) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0312 23:45:05.968565 584 log.go:172] (0xc000ba9290) Data frame received for 1\nI0312 23:45:05.968595 584 log.go:172] (0xc0008f4a00) (1) Data frame handling\nI0312 23:45:05.968605 584 log.go:172] (0xc0008f4a00) (1) Data frame sent\nI0312 23:45:05.968618 584 log.go:172] (0xc000ba9290) (0xc0008f4a00) Stream removed, broadcasting: 1\nI0312 23:45:05.968656 584 log.go:172] (0xc000ba9290) Go away received\nI0312 23:45:05.968851 584 log.go:172] (0xc000ba9290) (0xc0008f4a00) Stream removed, broadcasting: 1\nI0312 23:45:05.968863 584 log.go:172] (0xc000ba9290) (0xc0007dd720) Stream removed, broadcasting: 3\nI0312 23:45:05.968870 584 log.go:172] (0xc000ba9290) (0xc000546b40) Stream removed, broadcasting: 5\n" Mar 12 23:45:05.971: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 12 23:45:05.971: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 12 23:45:05.975: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 12 23:45:05.975: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 12 23:45:05.975: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Mar 12 23:45:05.977: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9792 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 12 23:45:06.144: INFO: stderr: "I0312 23:45:06.065303 606 log.go:172] (0xc0009f08f0) (0xc000a241e0) Create stream\nI0312 23:45:06.065336 606 log.go:172] (0xc0009f08f0) (0xc000a241e0) Stream added, broadcasting: 1\nI0312 23:45:06.067076 606 log.go:172] (0xc0009f08f0) Reply frame received for 1\nI0312 23:45:06.067105 606 log.go:172] (0xc0009f08f0) (0xc00080f220) Create stream\nI0312 23:45:06.067120 606 log.go:172] (0xc0009f08f0) (0xc00080f220) Stream added, broadcasting: 3\nI0312 23:45:06.067591 606 log.go:172] (0xc0009f08f0) Reply frame received for 3\nI0312 23:45:06.067607 606 log.go:172] (0xc0009f08f0) (0xc00080f400) Create stream\nI0312 23:45:06.067612 606 log.go:172] (0xc0009f08f0) (0xc00080f400) Stream added, broadcasting: 5\nI0312 23:45:06.068073 606 log.go:172] (0xc0009f08f0) Reply frame received for 5\nI0312 23:45:06.140140 606 log.go:172] (0xc0009f08f0) Data frame received for 3\nI0312 23:45:06.140160 606 log.go:172] (0xc00080f220) (3) Data frame handling\nI0312 23:45:06.140166 606 log.go:172] (0xc00080f220) (3) Data frame sent\nI0312 23:45:06.140171 606 log.go:172] (0xc0009f08f0) Data frame received for 3\nI0312 23:45:06.140174 606 log.go:172] (0xc00080f220) (3) Data frame handling\nI0312 23:45:06.140192 606 log.go:172] (0xc0009f08f0) Data frame received for 5\nI0312 23:45:06.140199 606 log.go:172] (0xc00080f400) (5) Data frame 
handling\nI0312 23:45:06.140206 606 log.go:172] (0xc00080f400) (5) Data frame sent\nI0312 23:45:06.140212 606 log.go:172] (0xc0009f08f0) Data frame received for 5\nI0312 23:45:06.140215 606 log.go:172] (0xc00080f400) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0312 23:45:06.140857 606 log.go:172] (0xc0009f08f0) Data frame received for 1\nI0312 23:45:06.140875 606 log.go:172] (0xc000a241e0) (1) Data frame handling\nI0312 23:45:06.140883 606 log.go:172] (0xc000a241e0) (1) Data frame sent\nI0312 23:45:06.140897 606 log.go:172] (0xc0009f08f0) (0xc000a241e0) Stream removed, broadcasting: 1\nI0312 23:45:06.140908 606 log.go:172] (0xc0009f08f0) Go away received\nI0312 23:45:06.141161 606 log.go:172] (0xc0009f08f0) (0xc000a241e0) Stream removed, broadcasting: 1\nI0312 23:45:06.141174 606 log.go:172] (0xc0009f08f0) (0xc00080f220) Stream removed, broadcasting: 3\nI0312 23:45:06.141179 606 log.go:172] (0xc0009f08f0) (0xc00080f400) Stream removed, broadcasting: 5\n" Mar 12 23:45:06.144: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 12 23:45:06.144: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 12 23:45:06.144: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9792 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 12 23:45:06.330: INFO: stderr: "I0312 23:45:06.234296 626 log.go:172] (0xc0009d62c0) (0xc000a5e640) Create stream\nI0312 23:45:06.234329 626 log.go:172] (0xc0009d62c0) (0xc000a5e640) Stream added, broadcasting: 1\nI0312 23:45:06.236964 626 log.go:172] (0xc0009d62c0) Reply frame received for 1\nI0312 23:45:06.236986 626 log.go:172] (0xc0009d62c0) (0xc000209540) Create stream\nI0312 23:45:06.236993 626 log.go:172] (0xc0009d62c0) (0xc000209540) Stream added, broadcasting: 3\nI0312 23:45:06.237584 626 log.go:172] (0xc0009d62c0) Reply frame received for 3\nI0312 23:45:06.237601 626 log.go:172] (0xc0009d62c0) (0xc0006843c0) Create stream\nI0312 23:45:06.237608 626 log.go:172] (0xc0009d62c0) (0xc0006843c0) Stream added, broadcasting: 5\nI0312 23:45:06.238151 626 log.go:172] (0xc0009d62c0) Reply frame received for 5\nI0312 23:45:06.306304 626 log.go:172] (0xc0009d62c0) Data frame received for 5\nI0312 23:45:06.306323 626 log.go:172] (0xc0006843c0) (5) Data frame handling\nI0312 23:45:06.306337 626 log.go:172] (0xc0006843c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0312 23:45:06.326735 626 log.go:172] (0xc0009d62c0) Data frame received for 3\nI0312 23:45:06.326753 626 log.go:172] (0xc000209540) (3) Data frame handling\nI0312 23:45:06.326766 626 log.go:172] (0xc000209540) (3) Data frame sent\nI0312 23:45:06.326772 626 log.go:172] (0xc0009d62c0) Data frame received for 3\nI0312 23:45:06.326785 626 log.go:172] (0xc000209540) (3) Data frame handling\nI0312 23:45:06.326840 626 log.go:172] (0xc0009d62c0) Data frame received for 5\nI0312 23:45:06.326852 626 log.go:172] (0xc0006843c0) (5) Data frame handling\nI0312 23:45:06.327697 626 log.go:172] (0xc0009d62c0) Data frame received for 1\nI0312 23:45:06.327713 626 log.go:172] (0xc000a5e640) (1) Data frame handling\nI0312 23:45:06.327723 626 log.go:172] (0xc000a5e640) (1) Data frame sent\nI0312 23:45:06.327756 626 log.go:172] (0xc0009d62c0) (0xc000a5e640) Stream removed, broadcasting: 1\nI0312 23:45:06.327788 626 log.go:172] (0xc0009d62c0) Go 
away received\nI0312 23:45:06.327975 626 log.go:172] (0xc0009d62c0) (0xc000a5e640) Stream removed, broadcasting: 1\nI0312 23:45:06.327986 626 log.go:172] (0xc0009d62c0) (0xc000209540) Stream removed, broadcasting: 3\nI0312 23:45:06.327993 626 log.go:172] (0xc0009d62c0) (0xc0006843c0) Stream removed, broadcasting: 5\n" Mar 12 23:45:06.330: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 12 23:45:06.330: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 12 23:45:06.330: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9792 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 12 23:45:06.529: INFO: stderr: "I0312 23:45:06.415303 646 log.go:172] (0xc0006022c0) (0xc0006f1540) Create stream\nI0312 23:45:06.415337 646 log.go:172] (0xc0006022c0) (0xc0006f1540) Stream added, broadcasting: 1\nI0312 23:45:06.417171 646 log.go:172] (0xc0006022c0) Reply frame received for 1\nI0312 23:45:06.417189 646 log.go:172] (0xc0006022c0) (0xc0004ccb40) Create stream\nI0312 23:45:06.417195 646 log.go:172] (0xc0006022c0) (0xc0004ccb40) Stream added, broadcasting: 3\nI0312 23:45:06.418287 646 log.go:172] (0xc0006022c0) Reply frame received for 3\nI0312 23:45:06.418311 646 log.go:172] (0xc0006022c0) (0xc0008e6000) Create stream\nI0312 23:45:06.418321 646 log.go:172] (0xc0006022c0) (0xc0008e6000) Stream added, broadcasting: 5\nI0312 23:45:06.418813 646 log.go:172] (0xc0006022c0) Reply frame received for 5\nI0312 23:45:06.480410 646 log.go:172] (0xc0006022c0) Data frame received for 5\nI0312 23:45:06.480431 646 log.go:172] (0xc0008e6000) (5) Data frame handling\nI0312 23:45:06.480438 646 log.go:172] (0xc0008e6000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0312 23:45:06.524550 646 log.go:172] (0xc0006022c0) Data frame received for 3\nI0312 23:45:06.524578 646 log.go:172] (0xc0004ccb40) (3) Data frame handling\nI0312 23:45:06.524588 646 log.go:172] (0xc0004ccb40) (3) Data frame sent\nI0312 23:45:06.524816 646 log.go:172] (0xc0006022c0) Data frame received for 5\nI0312 23:45:06.524837 646 log.go:172] (0xc0008e6000) (5) Data frame handling\nI0312 23:45:06.524851 646 log.go:172] (0xc0006022c0) Data frame received for 3\nI0312 23:45:06.524859 646 log.go:172] (0xc0004ccb40) (3) Data frame handling\nI0312 23:45:06.525965 646 log.go:172] (0xc0006022c0) Data frame received for 1\nI0312 23:45:06.525979 646 log.go:172] (0xc0006f1540) (1) Data frame handling\nI0312 23:45:06.525988 646 log.go:172] (0xc0006f1540) (1) Data frame sent\nI0312 23:45:06.526000 646 log.go:172] (0xc0006022c0) (0xc0006f1540) Stream removed, broadcasting: 1\nI0312 23:45:06.526013 646 log.go:172] (0xc0006022c0) Go away received\nI0312 23:45:06.526309 646 log.go:172] (0xc0006022c0) (0xc0006f1540) Stream removed, broadcasting: 1\nI0312 23:45:06.526323 646 log.go:172] (0xc0006022c0) (0xc0004ccb40) Stream removed, broadcasting: 3\nI0312 23:45:06.526330 646 log.go:172] (0xc0006022c0) (0xc0008e6000) Stream removed, broadcasting: 5\n" Mar 12 23:45:06.529: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 12 23:45:06.529: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 12 23:45:06.529: INFO: Waiting for statefulset status.replicas updated to 0 Mar 12 23:45:06.532: 
INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Mar 12 23:45:16.546: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 12 23:45:16.546: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 12 23:45:16.546: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 12 23:45:16.556: INFO: POD NODE PHASE GRACE CONDITIONS Mar 12 23:45:16.556: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:35 +0000 UTC }] Mar 12 23:45:16.556: INFO: ss-1 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:55 +0000 UTC }] Mar 12 23:45:16.556: INFO: ss-2 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:55 +0000 UTC }] Mar 12 23:45:16.556: INFO: Mar 12 23:45:16.556: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 12 23:45:17.559: INFO: POD NODE PHASE GRACE CONDITIONS Mar 12 23:45:17.559: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:35 +0000 UTC }] Mar 12 23:45:17.560: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:55 +0000 UTC }] Mar 12 23:45:17.560: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: 
[webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:55 +0000 UTC }] Mar 12 23:45:17.560: INFO: Mar 12 23:45:17.560: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 12 23:45:18.563: INFO: POD NODE PHASE GRACE CONDITIONS Mar 12 23:45:18.563: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:35 +0000 UTC }] Mar 12 23:45:18.563: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:55 +0000 UTC }] Mar 12 23:45:18.563: INFO: Mar 12 23:45:18.563: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 12 23:45:19.567: INFO: POD NODE PHASE GRACE CONDITIONS Mar 12 23:45:19.567: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:35 +0000 UTC }] Mar 12 23:45:19.567: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:55 +0000 UTC }] Mar 12 23:45:19.567: INFO: Mar 12 23:45:19.567: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 12 23:45:20.571: INFO: POD NODE PHASE GRACE CONDITIONS Mar 12 23:45:20.571: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:35 +0000 UTC }] Mar 12 23:45:20.571: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 
00:00:00 +0000 UTC 2020-03-12 23:44:55 +0000 UTC }] Mar 12 23:45:20.571: INFO: Mar 12 23:45:20.571: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 12 23:45:21.575: INFO: POD NODE PHASE GRACE CONDITIONS Mar 12 23:45:21.575: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:35 +0000 UTC }] Mar 12 23:45:21.575: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:55 +0000 UTC }] Mar 12 23:45:21.575: INFO: Mar 12 23:45:21.575: INFO: StatefulSet ss has not reached scale 0, at 2 Mar 12 23:45:22.585: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.974627415s Mar 12 23:45:23.589: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.964645872s Mar 12 23:45:24.592: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.961183358s Mar 12 23:45:25.597: INFO: Verifying statefulset ss doesn't scale past 0 for another 957.844643ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace statefulset-9792 Mar 12 23:45:26.601: INFO: Scaling statefulset ss to 0 Mar 12 23:45:26.609: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 12 23:45:26.612: INFO: Deleting all statefulset in ns statefulset-9792 Mar 12 23:45:26.614: INFO: Scaling statefulset ss to 0 Mar 12 23:45:26.623: INFO: Waiting for statefulset status.replicas updated to 0 Mar 12 23:45:26.625: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:45:26.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9792" for this suite.
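The readiness flips above are induced deliberately: each ss pod serves /usr/local/apache2/htdocs/index.html and its readiness probe fetches it, so moving the file away fails the probe while the container keeps running. A minimal sketch of the same scale-down by hand, reusing the pod and namespace names from the log (the image and probe details are assumptions, not the suite's exact spec):

kubectl -n statefulset-9792 exec ss-0 -- /bin/sh -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
kubectl -n statefulset-9792 scale statefulset ss --replicas=0
kubectl -n statefulset-9792 get pods -w    # all replicas terminate without waiting to become Ready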
• [SLOW TEST:51.628 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":27,"skipped":473,"failed":0} SS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:45:26.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override command Mar 12 23:45:26.714: INFO: Waiting up to 5m0s for pod "client-containers-a217e58b-7fa9-435b-a8e1-fccc76564ada" in namespace "containers-5468" to be "Succeeded or Failed" Mar 12 23:45:26.731: INFO: Pod "client-containers-a217e58b-7fa9-435b-a8e1-fccc76564ada": Phase="Pending", Reason="", readiness=false. Elapsed: 17.150895ms Mar 12 23:45:28.734: INFO: Pod "client-containers-a217e58b-7fa9-435b-a8e1-fccc76564ada": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.020624799s STEP: Saw pod success Mar 12 23:45:28.735: INFO: Pod "client-containers-a217e58b-7fa9-435b-a8e1-fccc76564ada" satisfied condition "Succeeded or Failed" Mar 12 23:45:28.737: INFO: Trying to get logs from node latest-worker pod client-containers-a217e58b-7fa9-435b-a8e1-fccc76564ada container test-container: STEP: delete the pod Mar 12 23:45:28.785: INFO: Waiting for pod client-containers-a217e58b-7fa9-435b-a8e1-fccc76564ada to disappear Mar 12 23:45:28.788: INFO: Pod client-containers-a217e58b-7fa9-435b-a8e1-fccc76564ada no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:45:28.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5468" for this suite. 
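The entrypoint-override test that just passed reduces to one field: spec.containers[].command replaces the image's ENTRYPOINT (while args replaces CMD). A minimal sketch of such a pod, with hypothetical names and image rather than the suite's generated spec:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: entrypoint-override    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox             # assumed image; the suite uses its own test image
    command: ["/bin/echo"]     # replaces the image ENTRYPOINT
    args: ["entrypoint", "overridden"]
EOF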
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":28,"skipped":475,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:45:28.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-downwardapi-nmxm STEP: Creating a pod to test atomic-volume-subpath Mar 12 23:45:28.868: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-nmxm" in namespace "subpath-311" to be "Succeeded or Failed" Mar 12 23:45:28.872: INFO: Pod "pod-subpath-test-downwardapi-nmxm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150406ms Mar 12 23:45:30.876: INFO: Pod "pod-subpath-test-downwardapi-nmxm": Phase="Running", Reason="", readiness=true. Elapsed: 2.007921197s Mar 12 23:45:32.880: INFO: Pod "pod-subpath-test-downwardapi-nmxm": Phase="Running", Reason="", readiness=true. Elapsed: 4.012124605s Mar 12 23:45:34.884: INFO: Pod "pod-subpath-test-downwardapi-nmxm": Phase="Running", Reason="", readiness=true. Elapsed: 6.015578449s Mar 12 23:45:36.888: INFO: Pod "pod-subpath-test-downwardapi-nmxm": Phase="Running", Reason="", readiness=true. Elapsed: 8.019354308s Mar 12 23:45:38.892: INFO: Pod "pod-subpath-test-downwardapi-nmxm": Phase="Running", Reason="", readiness=true. Elapsed: 10.023209612s Mar 12 23:45:40.896: INFO: Pod "pod-subpath-test-downwardapi-nmxm": Phase="Running", Reason="", readiness=true. Elapsed: 12.027542604s Mar 12 23:45:42.899: INFO: Pod "pod-subpath-test-downwardapi-nmxm": Phase="Running", Reason="", readiness=true. Elapsed: 14.030804654s Mar 12 23:45:44.907: INFO: Pod "pod-subpath-test-downwardapi-nmxm": Phase="Running", Reason="", readiness=true. Elapsed: 16.038412205s Mar 12 23:45:46.911: INFO: Pod "pod-subpath-test-downwardapi-nmxm": Phase="Running", Reason="", readiness=true. Elapsed: 18.042474642s Mar 12 23:45:48.915: INFO: Pod "pod-subpath-test-downwardapi-nmxm": Phase="Running", Reason="", readiness=true. Elapsed: 20.046266085s Mar 12 23:45:50.918: INFO: Pod "pod-subpath-test-downwardapi-nmxm": Phase="Running", Reason="", readiness=true. Elapsed: 22.049779359s Mar 12 23:45:52.922: INFO: Pod "pod-subpath-test-downwardapi-nmxm": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.053728026s STEP: Saw pod success Mar 12 23:45:52.922: INFO: Pod "pod-subpath-test-downwardapi-nmxm" satisfied condition "Succeeded or Failed" Mar 12 23:45:52.926: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-downwardapi-nmxm container test-container-subpath-downwardapi-nmxm: STEP: delete the pod Mar 12 23:45:52.959: INFO: Waiting for pod pod-subpath-test-downwardapi-nmxm to disappear Mar 12 23:45:52.983: INFO: Pod pod-subpath-test-downwardapi-nmxm no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-nmxm Mar 12 23:45:52.983: INFO: Deleting pod "pod-subpath-test-downwardapi-nmxm" in namespace "subpath-311" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:45:52.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-311" for this suite. • [SLOW TEST:24.192 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":29,"skipped":483,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:45:52.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 23:45:53.749: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 12 23:45:55.759: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719653553, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719653553, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719653553, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719653553, 
loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 23:45:58.771: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:45:58.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6585" for this suite. STEP: Destroying namespace "webhook-6585-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.047 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":30,"skipped":494,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:45:59.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 12 23:45:59.123: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 12 23:46:00.872: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5637 create -f -' Mar 12 
23:46:02.865: INFO: stderr: "" Mar 12 23:46:02.865: INFO: stdout: "e2e-test-crd-publish-openapi-6552-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 12 23:46:02.865: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5637 delete e2e-test-crd-publish-openapi-6552-crds test-cr' Mar 12 23:46:02.971: INFO: stderr: "" Mar 12 23:46:02.971: INFO: stdout: "e2e-test-crd-publish-openapi-6552-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Mar 12 23:46:02.971: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5637 apply -f -' Mar 12 23:46:03.189: INFO: stderr: "" Mar 12 23:46:03.189: INFO: stdout: "e2e-test-crd-publish-openapi-6552-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 12 23:46:03.189: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5637 delete e2e-test-crd-publish-openapi-6552-crds test-cr' Mar 12 23:46:03.281: INFO: stderr: "" Mar 12 23:46:03.281: INFO: stdout: "e2e-test-crd-publish-openapi-6552-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Mar 12 23:46:03.282: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6552-crds' Mar 12 23:46:03.510: INFO: stderr: "" Mar 12 23:46:03.510: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6552-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:46:06.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5637" for this suite. 
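The kubectl explain output above is nearly empty because the CRD under test publishes no validation schema, so the server accepts arbitrary properties on create and apply. A sketch of such a CRD, with a hypothetical group and kind; note it uses apiextensions.k8s.io/v1beta1, where the schema is optional (the newer v1 API, also served by this cluster, makes openAPIV3Schema mandatory):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com    # hypothetical plural.group
spec:
  group: example.com
  versions:
  - name: v1
    served: true
    storage: true
  scope: Namespaced
  names:
    plural: widgets
    kind: Widget
EOF
kubectl explain widgets    # DESCRIPTION is empty, mirroring the log above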
• [SLOW TEST:7.264 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":31,"skipped":506,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:46:06.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:46:08.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4197" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":32,"skipped":538,"failed":0} SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:46:08.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Mar 12 23:46:08.461: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:46:12.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9699" for this suite. 
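The init-container case above asserts ordering: on a restartPolicy Never pod, every initContainer must run to completion, in sequence, before the app containers start, and the pod's Initialized condition flips only afterwards. A minimal sketch (names and image assumed):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo              # hypothetical name
spec:
  restartPolicy: Never
  initContainers:
  - name: init-1
    image: busybox
    command: ["sh", "-c", "echo init-1 done"]
  - name: init-2
    image: busybox
    command: ["sh", "-c", "echo init-2 done"]
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo app started"]
EOF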
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":33,"skipped":541,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:46:12.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating secret secrets-7095/secret-test-ad93981a-cc8b-49c4-96e4-ef69bf8d2594 STEP: Creating a pod to test consume secrets Mar 12 23:46:12.248: INFO: Waiting up to 5m0s for pod "pod-configmaps-ea3086aa-b2bc-498f-a29c-0e77a2946a06" in namespace "secrets-7095" to be "Succeeded or Failed" Mar 12 23:46:12.253: INFO: Pod "pod-configmaps-ea3086aa-b2bc-498f-a29c-0e77a2946a06": Phase="Pending", Reason="", readiness=false. Elapsed: 4.417361ms Mar 12 23:46:14.256: INFO: Pod "pod-configmaps-ea3086aa-b2bc-498f-a29c-0e77a2946a06": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007639792s STEP: Saw pod success Mar 12 23:46:14.256: INFO: Pod "pod-configmaps-ea3086aa-b2bc-498f-a29c-0e77a2946a06" satisfied condition "Succeeded or Failed" Mar 12 23:46:14.258: INFO: Trying to get logs from node latest-worker pod pod-configmaps-ea3086aa-b2bc-498f-a29c-0e77a2946a06 container env-test: STEP: delete the pod Mar 12 23:46:14.278: INFO: Waiting for pod pod-configmaps-ea3086aa-b2bc-498f-a29c-0e77a2946a06 to disappear Mar 12 23:46:14.281: INFO: Pod pod-configmaps-ea3086aa-b2bc-498f-a29c-0e77a2946a06 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:46:14.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7095" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":34,"skipped":547,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:46:14.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-a403256f-ce82-4022-b889-8dbda7444541 STEP: Creating a pod to test consume secrets Mar 12 23:46:14.342: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e931fab1-f478-4eb8-8f0f-9b92c1ae3ca4" in namespace "projected-8064" to be "Succeeded or Failed" Mar 12 23:46:14.370: INFO: Pod "pod-projected-secrets-e931fab1-f478-4eb8-8f0f-9b92c1ae3ca4": Phase="Pending", Reason="", readiness=false. Elapsed: 28.614687ms Mar 12 23:46:16.373: INFO: Pod "pod-projected-secrets-e931fab1-f478-4eb8-8f0f-9b92c1ae3ca4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.031411378s STEP: Saw pod success Mar 12 23:46:16.373: INFO: Pod "pod-projected-secrets-e931fab1-f478-4eb8-8f0f-9b92c1ae3ca4" satisfied condition "Succeeded or Failed" Mar 12 23:46:16.375: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-e931fab1-f478-4eb8-8f0f-9b92c1ae3ca4 container projected-secret-volume-test: STEP: delete the pod Mar 12 23:46:16.395: INFO: Waiting for pod pod-projected-secrets-e931fab1-f478-4eb8-8f0f-9b92c1ae3ca4 to disappear Mar 12 23:46:16.400: INFO: Pod pod-projected-secrets-e931fab1-f478-4eb8-8f0f-9b92c1ae3ca4 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:46:16.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8064" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":35,"skipped":561,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:46:16.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:46:27.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1020" for this suite. • [SLOW TEST:11.127 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":275,"completed":36,"skipped":564,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:46:27.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: validating api versions Mar 12 23:46:27.619: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config api-versions' Mar 12 23:46:27.795: INFO: stderr: "" Mar 12 23:46:27.795: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:46:27.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4656" for this suite. 
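Before the api-versions check just above, the ResourceQuota test tracked a Service through quota accounting: creating the Service raises status.used, deleting it releases the usage. A hand-run sketch of the same bookkeeping (object names hypothetical):

kubectl create quota test-quota --hard=services=1
kubectl create service clusterip test-svc --tcp=80:80
kubectl describe quota test-quota    # Used: services 1
kubectl delete service test-svc
kubectl describe quota test-quota    # usage released: services 0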
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":275,"completed":37,"skipped":565,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:46:27.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 12 23:46:27.847: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:46:28.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8388" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":275,"completed":38,"skipped":577,"failed":0} SSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:46:28.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Mar 12 23:46:28.507: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Mar 12 23:46:28.516: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Mar 12 23:46:28.516: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Mar 12 23:46:28.522: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Mar 12 23:46:28.522: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Mar 12 23:46:28.549: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Mar 12 23:46:28.549: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Mar 12 23:46:35.610: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:46:35.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-6166" for this suite. 
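The requests/limits maps verified above come straight from the LimitRange: defaultRequest is injected as resources.requests and default as resources.limits into any container that omits them, and min/max bound what pods may ask for. A sketch matching the logged defaults (100m CPU / 200Mi memory / 200Gi ephemeral-storage requests; 500m / 500Mi / 500Gi limits; the name is hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: LimitRange
metadata:
  name: defaults-demo          # hypothetical name
spec:
  limits:
  - type: Container
    defaultRequest:            # becomes resources.requests
      cpu: 100m
      memory: 200Mi
      ephemeral-storage: 200Gi
    default:                   # becomes resources.limits
      cpu: 500m
      memory: 500Mi
      ephemeral-storage: 500Gi
EOF
kubectl run probe --image=busybox --restart=Never -- sleep 3600
kubectl get pod probe -o jsonpath='{.spec.containers[0].resources}'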
• [SLOW TEST:7.244 seconds] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":275,"completed":39,"skipped":582,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:46:35.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-7809/configmap-test-5430e95e-f61e-481b-ba57-cfae8baec8b1 STEP: Creating a pod to test consume configMaps Mar 12 23:46:35.750: INFO: Waiting up to 5m0s for pod "pod-configmaps-2fe9a94b-ab10-49f9-9a62-00b790d1822a" in namespace "configmap-7809" to be "Succeeded or Failed" Mar 12 23:46:35.754: INFO: Pod "pod-configmaps-2fe9a94b-ab10-49f9-9a62-00b790d1822a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178435ms Mar 12 23:46:37.757: INFO: Pod "pod-configmaps-2fe9a94b-ab10-49f9-9a62-00b790d1822a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006723464s STEP: Saw pod success Mar 12 23:46:37.757: INFO: Pod "pod-configmaps-2fe9a94b-ab10-49f9-9a62-00b790d1822a" satisfied condition "Succeeded or Failed" Mar 12 23:46:37.758: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-2fe9a94b-ab10-49f9-9a62-00b790d1822a container env-test: STEP: delete the pod Mar 12 23:46:37.793: INFO: Waiting for pod pod-configmaps-2fe9a94b-ab10-49f9-9a62-00b790d1822a to disappear Mar 12 23:46:37.815: INFO: Pod pod-configmaps-2fe9a94b-ab10-49f9-9a62-00b790d1822a no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:46:37.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7809" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":40,"skipped":595,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:46:37.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota (validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:46:50.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8054" for this suite. • [SLOW TEST:13.128 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod.
[Conformance]","total":275,"completed":41,"skipped":612,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:46:50.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-2297 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating statefulset ss in namespace statefulset-2297 Mar 12 23:46:51.048: INFO: Found 0 stateful pods, waiting for 1 Mar 12 23:47:01.052: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 12 23:47:01.072: INFO: Deleting all statefulset in ns statefulset-2297 Mar 12 23:47:01.079: INFO: Scaling statefulset ss to 0 Mar 12 23:47:21.125: INFO: Waiting for statefulset status.replicas updated to 0 Mar 12 23:47:21.127: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:47:21.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2297" for this suite. 
• [SLOW TEST:30.201 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":42,"skipped":633,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:47:21.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 12 23:47:21.285: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"96e77570-d7ef-4309-8184-abd17336785c", Controller:(*bool)(0xc0039f373a), BlockOwnerDeletion:(*bool)(0xc0039f373b)}} Mar 12 23:47:21.328: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"dd8da7c1-7baa-4a13-8b57-914ffbf5dedf", Controller:(*bool)(0xc0017c36ba), BlockOwnerDeletion:(*bool)(0xc0017c36bb)}} Mar 12 23:47:21.333: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"7e08c4a3-84e8-4a98-91af-f2d9d3c90486", Controller:(*bool)(0xc0033d1f42), BlockOwnerDeletion:(*bool)(0xc0033d1f43)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:47:26.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6327" for this suite. 
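In the garbage-collector case above, pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2, a deliberate cycle; the test asserts the collector deletes all three rather than deadlocking on blockOwnerDeletion. The shape of one such reference, reconstructed from the dump above:

metadata:
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: pod3                 # pod1 names pod3 as its owner
    uid: 96e77570-d7ef-4309-8184-abd17336785c
    controller: true
    blockOwnerDeletion: true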
• [SLOW TEST:5.239 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":43,"skipped":667,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:47:26.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 12 23:47:26.476: INFO: Waiting up to 5m0s for pod "pod-ba90e0cc-470f-441c-9336-22053a3e2426" in namespace "emptydir-9102" to be "Succeeded or Failed" Mar 12 23:47:26.521: INFO: Pod "pod-ba90e0cc-470f-441c-9336-22053a3e2426": Phase="Pending", Reason="", readiness=false. Elapsed: 45.478819ms Mar 12 23:47:28.525: INFO: Pod "pod-ba90e0cc-470f-441c-9336-22053a3e2426": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.049022277s STEP: Saw pod success Mar 12 23:47:28.525: INFO: Pod "pod-ba90e0cc-470f-441c-9336-22053a3e2426" satisfied condition "Succeeded or Failed" Mar 12 23:47:28.527: INFO: Trying to get logs from node latest-worker pod pod-ba90e0cc-470f-441c-9336-22053a3e2426 container test-container: STEP: delete the pod Mar 12 23:47:28.565: INFO: Waiting for pod pod-ba90e0cc-470f-441c-9336-22053a3e2426 to disappear Mar 12 23:47:28.569: INFO: Pod pod-ba90e0cc-470f-441c-9336-22053a3e2426 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:47:28.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9102" for this suite. 
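The emptydir test above writes a file as root with mode 0644 onto an emptyDir volume on the default medium and expects the pod to reach Succeeded. A minimal pod covering the same combination (image, command and paths are illustrative; the suite uses its own test image):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    # Write a file as root, force mode 0644, and show the result.
    command: ["sh", "-c", "echo hello > /mnt/test/f && chmod 0644 /mnt/test/f && ls -l /mnt/test/f"]
    volumeMounts:
    - name: work
      mountPath: /mnt/test
  volumes:
  - name: work
    emptyDir: {}   # default medium, matching the (root,0644,default) case
EOF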
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":44,"skipped":688,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:47:28.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9489.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-9489.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9489.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9489.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-9489.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9489.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 12 23:47:32.694: INFO: DNS probes using dns-9489/dns-test-64c5b888-395a-40cd-93ed-23882267c5dd succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:47:32.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9489" for this suite. 
•{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":45,"skipped":730,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:47:32.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 23:47:33.381: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 12 23:47:35.393: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719653653, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719653653, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719653653, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719653653, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 23:47:38.410: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:47:38.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4883" for this suite. STEP: Destroying namespace "webhook-4883-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.844 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":46,"skipped":758,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:47:38.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name cm-test-opt-del-ca6949b1-3030-4291-b0d0-c44ee4632c6d STEP: Creating configMap with name cm-test-opt-upd-cd786e79-931b-49a1-8f7b-b4761127c0df STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-ca6949b1-3030-4291-b0d0-c44ee4632c6d STEP: Updating configmap cm-test-opt-upd-cd786e79-931b-49a1-8f7b-b4761127c0df STEP: Creating configMap with name cm-test-opt-create-6e571717-9f94-460d-8e5b-269e2b253e73 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:48:47.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1875" for this suite. 
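The projected-volume test above relies on every configMap source being marked optional, which is why the pod keeps running while one configMap is deleted and another does not exist yet. A minimal projected volume of that shape (names shortened from the generated ones in the log):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-optional-demo
spec:
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: cm-test-opt-del      # deleted mid-test; optional keeps the pod alive
          optional: true
      - configMap:
          name: cm-test-opt-create   # created mid-test; contents appear in the volume
          optional: true
EOF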
• [SLOW TEST:68.467 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":47,"skipped":766,"failed":0} S ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:48:47.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Mar 12 23:48:47.113: INFO: Waiting up to 5m0s for pod "downward-api-2e3b918f-f427-4e46-b387-27095f7ff827" in namespace "downward-api-2217" to be "Succeeded or Failed" Mar 12 23:48:47.134: INFO: Pod "downward-api-2e3b918f-f427-4e46-b387-27095f7ff827": Phase="Pending", Reason="", readiness=false. Elapsed: 21.780117ms Mar 12 23:48:49.139: INFO: Pod "downward-api-2e3b918f-f427-4e46-b387-27095f7ff827": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.025909412s STEP: Saw pod success Mar 12 23:48:49.139: INFO: Pod "downward-api-2e3b918f-f427-4e46-b387-27095f7ff827" satisfied condition "Succeeded or Failed" Mar 12 23:48:49.141: INFO: Trying to get logs from node latest-worker2 pod downward-api-2e3b918f-f427-4e46-b387-27095f7ff827 container dapi-container: STEP: delete the pod Mar 12 23:48:49.175: INFO: Waiting for pod downward-api-2e3b918f-f427-4e46-b387-27095f7ff827 to disappear Mar 12 23:48:49.179: INFO: Pod downward-api-2e3b918f-f427-4e46-b387-27095f7ff827 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:48:49.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2217" for this suite. 
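The downward-API test above injects the container's own cpu/memory limits and requests into environment variables via resourceFieldRef. A minimal sketch (resource values and variable names are arbitrary):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "env | grep -E '^(CPU|MEMORY)_'"]
    resources:
      requests: { cpu: 250m, memory: 32Mi }
      limits:   { cpu: 500m, memory: 64Mi }
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
EOF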
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":48,"skipped":767,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:48:49.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-ljf2 STEP: Creating a pod to test atomic-volume-subpath Mar 12 23:48:49.319: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-ljf2" in namespace "subpath-2814" to be "Succeeded or Failed" Mar 12 23:48:49.323: INFO: Pod "pod-subpath-test-configmap-ljf2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.136797ms Mar 12 23:48:51.326: INFO: Pod "pod-subpath-test-configmap-ljf2": Phase="Running", Reason="", readiness=true. Elapsed: 2.007235451s Mar 12 23:48:53.355: INFO: Pod "pod-subpath-test-configmap-ljf2": Phase="Running", Reason="", readiness=true. Elapsed: 4.036128549s Mar 12 23:48:55.358: INFO: Pod "pod-subpath-test-configmap-ljf2": Phase="Running", Reason="", readiness=true. Elapsed: 6.039219001s Mar 12 23:48:57.385: INFO: Pod "pod-subpath-test-configmap-ljf2": Phase="Running", Reason="", readiness=true. Elapsed: 8.066148043s Mar 12 23:48:59.388: INFO: Pod "pod-subpath-test-configmap-ljf2": Phase="Running", Reason="", readiness=true. Elapsed: 10.068967134s Mar 12 23:49:01.391: INFO: Pod "pod-subpath-test-configmap-ljf2": Phase="Running", Reason="", readiness=true. Elapsed: 12.072188893s Mar 12 23:49:03.394: INFO: Pod "pod-subpath-test-configmap-ljf2": Phase="Running", Reason="", readiness=true. Elapsed: 14.075630727s Mar 12 23:49:05.398: INFO: Pod "pod-subpath-test-configmap-ljf2": Phase="Running", Reason="", readiness=true. Elapsed: 16.079330586s Mar 12 23:49:07.401: INFO: Pod "pod-subpath-test-configmap-ljf2": Phase="Running", Reason="", readiness=true. Elapsed: 18.082441724s Mar 12 23:49:09.404: INFO: Pod "pod-subpath-test-configmap-ljf2": Phase="Running", Reason="", readiness=true. Elapsed: 20.085712296s Mar 12 23:49:11.409: INFO: Pod "pod-subpath-test-configmap-ljf2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.090364775s STEP: Saw pod success Mar 12 23:49:11.409: INFO: Pod "pod-subpath-test-configmap-ljf2" satisfied condition "Succeeded or Failed" Mar 12 23:49:11.411: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-ljf2 container test-container-subpath-configmap-ljf2: STEP: delete the pod Mar 12 23:49:11.450: INFO: Waiting for pod pod-subpath-test-configmap-ljf2 to disappear Mar 12 23:49:11.463: INFO: Pod pod-subpath-test-configmap-ljf2 no longer exists STEP: Deleting pod pod-subpath-test-configmap-ljf2 Mar 12 23:49:11.463: INFO: Deleting pod "pod-subpath-test-configmap-ljf2" in namespace "subpath-2814" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:49:11.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2814" for this suite. • [SLOW TEST:22.284 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":49,"skipped":784,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:49:11.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating server pod server in namespace prestop-9583 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-9583 STEP: Deleting pre-stop pod Mar 12 23:49:20.580: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:49:20.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-9583" for this suite. 
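The PreStop test above deletes the tester pod and then reads the server's log to confirm exactly one prestop callback arrived before termination. The hook itself is declared under lifecycle.preStop; a hedged, self-contained exec variant (the suite's real server/tester pair talks HTTP instead):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        # Runs once when the pod is deleted, before SIGTERM reaches the container.
        exec:
          command: ["sh", "-c", "echo prestop >> /tmp/hook.log && sleep 2"]
EOF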
• [SLOW TEST:9.139 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":275,"completed":50,"skipped":813,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:49:20.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 12 23:49:20.666: INFO: Waiting up to 5m0s for pod "downwardapi-volume-02a3c423-a041-402b-a364-3c9df130d63a" in namespace "projected-8629" to be "Succeeded or Failed" Mar 12 23:49:20.721: INFO: Pod "downwardapi-volume-02a3c423-a041-402b-a364-3c9df130d63a": Phase="Pending", Reason="", readiness=false. Elapsed: 55.422973ms Mar 12 23:49:22.725: INFO: Pod "downwardapi-volume-02a3c423-a041-402b-a364-3c9df130d63a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059021312s Mar 12 23:49:24.728: INFO: Pod "downwardapi-volume-02a3c423-a041-402b-a364-3c9df130d63a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062838594s STEP: Saw pod success Mar 12 23:49:24.728: INFO: Pod "downwardapi-volume-02a3c423-a041-402b-a364-3c9df130d63a" satisfied condition "Succeeded or Failed" Mar 12 23:49:24.732: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-02a3c423-a041-402b-a364-3c9df130d63a container client-container: STEP: delete the pod Mar 12 23:49:24.750: INFO: Waiting for pod downwardapi-volume-02a3c423-a041-402b-a364-3c9df130d63a to disappear Mar 12 23:49:24.755: INFO: Pod downwardapi-volume-02a3c423-a041-402b-a364-3c9df130d63a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:49:24.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8629" for this suite. 
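This projected downwardAPI test reads the container's memory request from a file rather than an environment variable. The same resourceFieldRef can be projected into a volume; note that the volume form must name the container explicitly (file name and mount path are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-downward-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests: { memory: 32Mi }
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container   # required in the volume form
              resource: requests.memory
EOF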
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":51,"skipped":817,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:49:24.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 12 23:49:24.866: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1902' Mar 12 23:49:25.204: INFO: stderr: "" Mar 12 23:49:25.205: INFO: stdout: "replicationcontroller/agnhost-master created\n" Mar 12 23:49:25.205: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1902' Mar 12 23:49:25.451: INFO: stderr: "" Mar 12 23:49:25.451: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 12 23:49:26.455: INFO: Selector matched 1 pods for map[app:agnhost] Mar 12 23:49:26.455: INFO: Found 0 / 1 Mar 12 23:49:27.455: INFO: Selector matched 1 pods for map[app:agnhost] Mar 12 23:49:27.455: INFO: Found 1 / 1 Mar 12 23:49:27.455: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 12 23:49:27.458: INFO: Selector matched 1 pods for map[app:agnhost] Mar 12 23:49:27.458: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Mar 12 23:49:27.458: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config describe pod agnhost-master-4ndl5 --namespace=kubectl-1902' Mar 12 23:49:27.588: INFO: stderr: "" Mar 12 23:49:27.588: INFO: stdout: "Name: agnhost-master-4ndl5\nNamespace: kubectl-1902\nPriority: 0\nNode: latest-worker/172.17.0.16\nStart Time: Thu, 12 Mar 2020 23:49:25 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.121\nIPs:\n IP: 10.244.1.121\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://d483ded9ddff467fdb61e930ef3bbf17c68878533a24eed36be126cf6e1f1ff3\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 12 Mar 2020 23:49:26 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-tsd66 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-tsd66:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-tsd66\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 2s default-scheduler Successfully assigned kubectl-1902/agnhost-master-4ndl5 to latest-worker\n Normal Pulled 2s kubelet, latest-worker Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n Normal Created 1s kubelet, latest-worker Created container agnhost-master\n Normal Started 1s kubelet, latest-worker Started container agnhost-master\n" Mar 12 23:49:27.588: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-1902' Mar 12 23:49:27.710: INFO: stderr: "" Mar 12 23:49:27.710: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-1902\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 2s replication-controller Created pod: agnhost-master-4ndl5\n" Mar 12 23:49:27.710: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-1902' Mar 12 23:49:27.796: INFO: stderr: "" Mar 12 23:49:27.796: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-1902\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.96.28.188\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.121:6379\nSession Affinity: None\nEvents: \n" Mar 12 23:49:27.799: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config describe 
node latest-control-plane' Mar 12 23:49:27.893: INFO: stderr: "" Mar 12 23:49:27.893: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 08 Mar 2020 14:49:22 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Thu, 12 Mar 2020 23:49:22 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Thu, 12 Mar 2020 23:47:45 +0000 Sun, 08 Mar 2020 14:49:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 12 Mar 2020 23:47:45 +0000 Sun, 08 Mar 2020 14:49:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 12 Mar 2020 23:47:45 +0000 Sun, 08 Mar 2020 14:49:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 12 Mar 2020 23:47:45 +0000 Sun, 08 Mar 2020 14:50:16 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.17\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131767112Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131767112Ki\n pods: 110\nSystem Info:\n Machine ID: fb03af8223ea4430b6faaad8b31da5e5\n System UUID: 220fc748-c3b9-4de4-aa76-4a3520169f00\n Boot ID: 3de0b5b8-8b8f-48d3-9705-cabccc881bdb\n Kernel Version: 4.4.0-142-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.17.0\n Kube-Proxy Version: v1.17.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (8 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-gxrvh 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 4d8h\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4d9h\n kube-system kindnet-gp8bt 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 4d8h\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 4d9h\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 4d9h\n kube-system kube-proxy-nxxmk 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4d8h\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 4d9h\n local-path-storage local-path-provisioner-7745554f7f-52xw4 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4d8h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 750m (4%) 100m (0%)\n memory 120Mi (0%) 220Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Mar 12 23:49:27.893: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config describe namespace kubectl-1902' Mar 12 23:49:27.974: INFO: stderr: 
"" Mar 12 23:49:27.974: INFO: stdout: "Name: kubectl-1902\nLabels: e2e-framework=kubectl\n e2e-run=4114b614-3358-44c1-8546-4721f3a73760\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:49:27.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1902" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":275,"completed":52,"skipped":851,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:49:27.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service multi-endpoint-test in namespace services-4586 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4586 to expose endpoints map[] Mar 12 23:49:28.075: INFO: successfully validated that service multi-endpoint-test in namespace services-4586 exposes endpoints map[] (4.219643ms elapsed) STEP: Creating pod pod1 in namespace services-4586 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4586 to expose endpoints map[pod1:[100]] Mar 12 23:49:30.175: INFO: successfully validated that service multi-endpoint-test in namespace services-4586 exposes endpoints map[pod1:[100]] (2.085747826s elapsed) STEP: Creating pod pod2 in namespace services-4586 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4586 to expose endpoints map[pod1:[100] pod2:[101]] Mar 12 23:49:32.234: INFO: successfully validated that service multi-endpoint-test in namespace services-4586 exposes endpoints map[pod1:[100] pod2:[101]] (2.055591774s elapsed) STEP: Deleting pod pod1 in namespace services-4586 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4586 to expose endpoints map[pod2:[101]] Mar 12 23:49:33.273: INFO: successfully validated that service multi-endpoint-test in namespace services-4586 exposes endpoints map[pod2:[101]] (1.035364938s elapsed) STEP: Deleting pod pod2 in namespace services-4586 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4586 to expose endpoints map[] Mar 12 23:49:33.290: INFO: successfully validated that service multi-endpoint-test in namespace services-4586 exposes endpoints map[] (14.337544ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:49:33.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4586" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:5.362 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":275,"completed":53,"skipped":873,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:49:33.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:49:41.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8653" for this suite. • [SLOW TEST:8.069 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":54,"skipped":896,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:49:41.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override arguments Mar 12 23:49:41.490: INFO: Waiting up to 5m0s for pod "client-containers-b0cb7f93-6114-497f-9aea-46c64dc0ff91" in namespace "containers-7675" to be "Succeeded or Failed" Mar 12 23:49:41.495: INFO: Pod "client-containers-b0cb7f93-6114-497f-9aea-46c64dc0ff91": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.233433ms Mar 12 23:49:43.497: INFO: Pod "client-containers-b0cb7f93-6114-497f-9aea-46c64dc0ff91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007092242s Mar 12 23:49:45.500: INFO: Pod "client-containers-b0cb7f93-6114-497f-9aea-46c64dc0ff91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009794584s STEP: Saw pod success Mar 12 23:49:45.500: INFO: Pod "client-containers-b0cb7f93-6114-497f-9aea-46c64dc0ff91" satisfied condition "Succeeded or Failed" Mar 12 23:49:45.502: INFO: Trying to get logs from node latest-worker pod client-containers-b0cb7f93-6114-497f-9aea-46c64dc0ff91 container test-container: STEP: delete the pod Mar 12 23:49:45.520: INFO: Waiting for pod client-containers-b0cb7f93-6114-497f-9aea-46c64dc0ff91 to disappear Mar 12 23:49:45.525: INFO: Pod client-containers-b0cb7f93-6114-497f-9aea-46c64dc0ff91 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:49:45.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7675" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":55,"skipped":920,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:49:45.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 12 23:49:45.586: INFO: Waiting up to 5m0s for pod "downwardapi-volume-82132f3d-cc19-4608-a686-cb2c40ba40ea" in namespace "downward-api-1992" to be "Succeeded or Failed" Mar 12 23:49:45.622: INFO: Pod "downwardapi-volume-82132f3d-cc19-4608-a686-cb2c40ba40ea": Phase="Pending", Reason="", readiness=false. Elapsed: 35.278878ms Mar 12 23:49:47.637: INFO: Pod "downwardapi-volume-82132f3d-cc19-4608-a686-cb2c40ba40ea": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.050731991s STEP: Saw pod success Mar 12 23:49:47.637: INFO: Pod "downwardapi-volume-82132f3d-cc19-4608-a686-cb2c40ba40ea" satisfied condition "Succeeded or Failed" Mar 12 23:49:47.639: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-82132f3d-cc19-4608-a686-cb2c40ba40ea container client-container: STEP: delete the pod Mar 12 23:49:47.672: INFO: Waiting for pod downwardapi-volume-82132f3d-cc19-4608-a686-cb2c40ba40ea to disappear Mar 12 23:49:47.684: INFO: Pod downwardapi-volume-82132f3d-cc19-4608-a686-cb2c40ba40ea no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:49:47.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1992" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":56,"skipped":933,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:49:47.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Mar 12 23:49:47.770: INFO: PodSpec: initContainers in spec.initContainers Mar 12 23:50:36.644: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-3023fc2c-180c-4a90-b03f-21b1d7fcc75c", GenerateName:"", Namespace:"init-container-3466", SelfLink:"/api/v1/namespaces/init-container-3466/pods/pod-init-3023fc2c-180c-4a90-b03f-21b1d7fcc75c", UID:"298b33f7-9229-4f5e-9d89-ea85d3bfa562", ResourceVersion:"1212310", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63719653787, loc:(*time.Location)(0x7b1e080)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"770536668"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-zjjjv", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001a22080), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), 
PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-zjjjv", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-zjjjv", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-zjjjv", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), 
LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc005468068), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000a9c000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0054680f0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc005468110)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc005468118), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00546811c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719653787, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719653787, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719653787, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719653787, loc:(*time.Location)(0x7b1e080)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.16", PodIP:"10.244.1.129", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.129"}}, StartTime:(*v1.Time)(0xc002e46120), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000a9c0e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(0xc000a9c150)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://3154e5de77e44a76a48c251a9289c9e2b9715bcffa65d337c122333888d672e6", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002e46180), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002e46160), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc00546819f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:50:36.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3466" for this suite. • [SLOW TEST:48.963 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":57,"skipped":952,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:50:36.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD Mar 12 23:50:36.732: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:50:51.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5177" for this suite. • [SLOW TEST:14.628 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":58,"skipped":962,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:50:51.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 12 23:50:51.336: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Mar 12 23:50:51.380: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 12 23:50:56.383: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 12 23:50:56.383: INFO: Creating deployment "test-rolling-update-deployment" Mar 12 23:50:56.405: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Mar 12 23:50:56.428: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Mar 12 23:50:58.432: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Mar 12 23:50:58.434: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 12 23:50:58.439: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-4371 /apis/apps/v1/namespaces/deployment-4371/deployments/test-rolling-update-deployment 6ff85d66-d477-44ee-8e70-ee6b3f21ce99 1212449 1 2020-03-12 23:50:56 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost 
us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003102a38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-12 23:50:56 +0000 UTC,LastTransitionTime:2020-03-12 23:50:56 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-664dd8fc7f" has successfully progressed.,LastUpdateTime:2020-03-12 23:50:57 +0000 UTC,LastTransitionTime:2020-03-12 23:50:56 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 12 23:50:58.441: INFO: New ReplicaSet "test-rolling-update-deployment-664dd8fc7f" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f deployment-4371 /apis/apps/v1/namespaces/deployment-4371/replicasets/test-rolling-update-deployment-664dd8fc7f 0871a3fc-0a27-4298-aed5-9ce8032bb357 1212438 1 2020-03-12 23:50:56 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 6ff85d66-d477-44ee-8e70-ee6b3f21ce99 0xc003102ff7 0xc003102ff8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 664dd8fc7f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003103078 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 12 23:50:58.441: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Mar 12 23:50:58.441: INFO:
&ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-4371 /apis/apps/v1/namespaces/deployment-4371/replicasets/test-rolling-update-controller 67619f22-5a4b-4fff-a039-fdb498dafb1c 1212447 2 2020-03-12 23:50:51 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 6ff85d66-d477-44ee-8e70-ee6b3f21ce99 0xc003102f07 0xc003102f08}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003102f78 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 12 23:50:58.443: INFO: Pod "test-rolling-update-deployment-664dd8fc7f-8bt4m" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f-8bt4m test-rolling-update-deployment-664dd8fc7f- deployment-4371 /api/v1/namespaces/deployment-4371/pods/test-rolling-update-deployment-664dd8fc7f-8bt4m 0d92ad4e-b12d-4b4f-b7af-d3ae08b76dd5 1212437 0 2020-03-12 23:50:56 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-664dd8fc7f 0871a3fc-0a27-4298-aed5-9ce8032bb357 0xc003103547 0xc003103548}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4hxnn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4hxnn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4hxnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 23:50:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 23:50:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 23:50:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 23:50:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:10.244.1.131,StartTime:2020-03-12 23:50:56 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-12 23:50:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://a9633f6a6f0fd2aad94ffa4539ac926f2629a57c7e2bfe1b20b1f2bc46154269,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.131,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:50:58.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4371" for this suite. • [SLOW TEST:7.166 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":59,"skipped":973,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:50:58.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a replication controller Mar 12 23:50:58.503: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5503' Mar 12 23:50:58.768: INFO: stderr: "" Mar 12 23:50:58.768: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Mar 12 23:50:58.768: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5503' Mar 12 23:50:58.855: INFO: stderr: "" Mar 12 23:50:58.855: INFO: stdout: "update-demo-nautilus-lg7hx update-demo-nautilus-vv9nv " Mar 12 23:50:58.855: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lg7hx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5503' Mar 12 23:50:58.932: INFO: stderr: "" Mar 12 23:50:58.932: INFO: stdout: "" Mar 12 23:50:58.932: INFO: update-demo-nautilus-lg7hx is created but not running Mar 12 23:51:03.932: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5503' Mar 12 23:51:04.005: INFO: stderr: "" Mar 12 23:51:04.005: INFO: stdout: "update-demo-nautilus-lg7hx update-demo-nautilus-vv9nv " Mar 12 23:51:04.005: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lg7hx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5503' Mar 12 23:51:04.068: INFO: stderr: "" Mar 12 23:51:04.068: INFO: stdout: "true" Mar 12 23:51:04.068: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lg7hx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5503' Mar 12 23:51:04.132: INFO: stderr: "" Mar 12 23:51:04.132: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 12 23:51:04.132: INFO: validating pod update-demo-nautilus-lg7hx Mar 12 23:51:04.166: INFO: got data: { "image": "nautilus.jpg" } Mar 12 23:51:04.166: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 12 23:51:04.166: INFO: update-demo-nautilus-lg7hx is verified up and running Mar 12 23:51:04.166: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vv9nv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5503' Mar 12 23:51:04.231: INFO: stderr: "" Mar 12 23:51:04.231: INFO: stdout: "true" Mar 12 23:51:04.231: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vv9nv -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5503' Mar 12 23:51:04.296: INFO: stderr: "" Mar 12 23:51:04.296: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 12 23:51:04.296: INFO: validating pod update-demo-nautilus-vv9nv Mar 12 23:51:04.299: INFO: got data: { "image": "nautilus.jpg" } Mar 12 23:51:04.299: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 12 23:51:04.299: INFO: update-demo-nautilus-vv9nv is verified up and running STEP: using delete to clean up resources Mar 12 23:51:04.299: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5503' Mar 12 23:51:04.381: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 12 23:51:04.381: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 12 23:51:04.381: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5503' Mar 12 23:51:04.446: INFO: stderr: "No resources found in kubectl-5503 namespace.\n" Mar 12 23:51:04.446: INFO: stdout: "" Mar 12 23:51:04.446: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5503 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 12 23:51:04.509: INFO: stderr: "" Mar 12 23:51:04.509: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:51:04.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5503" for this suite. 
• [SLOW TEST:6.066 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":275,"completed":60,"skipped":975,"failed":0} SSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:51:04.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 12 23:51:04.590: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Mar 12 23:51:04.598: INFO: Number of nodes with available pods: 0 Mar 12 23:51:04.598: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Mar 12 23:51:04.645: INFO: Number of nodes with available pods: 0 Mar 12 23:51:04.645: INFO: Node latest-worker2 is running more than one daemon pod Mar 12 23:51:05.648: INFO: Number of nodes with available pods: 0 Mar 12 23:51:05.648: INFO: Node latest-worker2 is running more than one daemon pod Mar 12 23:51:06.649: INFO: Number of nodes with available pods: 1 Mar 12 23:51:06.649: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Mar 12 23:51:06.678: INFO: Number of nodes with available pods: 1 Mar 12 23:51:06.679: INFO: Number of running nodes: 0, number of available pods: 1 Mar 12 23:51:07.683: INFO: Number of nodes with available pods: 0 Mar 12 23:51:07.683: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Mar 12 23:51:07.695: INFO: Number of nodes with available pods: 0 Mar 12 23:51:07.695: INFO: Node latest-worker2 is running more than one daemon pod Mar 12 23:51:08.700: INFO: Number of nodes with available pods: 0 Mar 12 23:51:08.700: INFO: Node latest-worker2 is running more than one daemon pod Mar 12 23:51:09.698: INFO: Number of nodes with available pods: 0 Mar 12 23:51:09.698: INFO: Node latest-worker2 is running more than one daemon pod Mar 12 23:51:10.699: INFO: Number of nodes with available pods: 0 Mar 12 23:51:10.699: INFO: Node latest-worker2 is running more than one daemon pod Mar 12 23:51:11.700: INFO: Number of nodes with available pods: 0 Mar 12 23:51:11.700: INFO: Node latest-worker2 is running more than one daemon pod Mar 12 23:51:12.710: INFO: Number of nodes with available pods: 0 Mar 12 23:51:12.710: INFO: Node latest-worker2 is running more than one daemon pod Mar 12 23:51:13.699: INFO: Number of nodes with available pods: 0 Mar 12 23:51:13.699: INFO: Node latest-worker2 is running more than one daemon pod Mar 12 23:51:14.699: INFO: Number of nodes with available pods: 1 Mar 12 23:51:14.699: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4537, will wait for the garbage collector to delete the pods Mar 12 23:51:14.761: INFO: Deleting DaemonSet.extensions daemon-set took: 5.077949ms Mar 12 23:51:15.061: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.272661ms Mar 12 23:51:22.164: INFO: Number of nodes with available pods: 0 Mar 12 23:51:22.164: INFO: Number of running nodes: 0, number of available pods: 0 Mar 12 23:51:22.169: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4537/daemonsets","resourceVersion":"1212654"},"items":null} Mar 12 23:51:22.171: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4537/pods","resourceVersion":"1212654"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:51:22.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4537" for this suite. 
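The complex-daemon scenario above can be approximated declaratively; in this sketch the node-selector key and the image are illustrative assumptions (the log only confirms the color values blue and green):

  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: daemon-set
  spec:
    selector:
      matchLabels:
        app: daemon-set
    updateStrategy:
      type: RollingUpdate
    template:
      metadata:
        labels:
          app: daemon-set
      spec:
        nodeSelector:
          color: blue              # daemon pods land only on nodes labeled color=blue
        containers:
        - name: app
          image: k8s.gcr.io/pause:3.2

  kubectl label node NODE_NAME color=blue                # schedules a daemon pod there
  kubectl label node NODE_NAME color=green --overwrite   # unschedules it again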
• [SLOW TEST:17.697 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":61,"skipped":981,"failed":0} SSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:51:22.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service endpoint-test2 in namespace services-592 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-592 to expose endpoints map[] Mar 12 23:51:22.310: INFO: Get endpoints failed (6.727828ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Mar 12 23:51:23.313: INFO: successfully validated that service endpoint-test2 in namespace services-592 exposes endpoints map[] (1.009733441s elapsed) STEP: Creating pod pod1 in namespace services-592 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-592 to expose endpoints map[pod1:[80]] Mar 12 23:51:25.359: INFO: successfully validated that service endpoint-test2 in namespace services-592 exposes endpoints map[pod1:[80]] (2.041292419s elapsed) STEP: Creating pod pod2 in namespace services-592 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-592 to expose endpoints map[pod1:[80] pod2:[80]] Mar 12 23:51:27.433: INFO: successfully validated that service endpoint-test2 in namespace services-592 exposes endpoints map[pod1:[80] pod2:[80]] (2.069914869s elapsed) STEP: Deleting pod pod1 in namespace services-592 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-592 to expose endpoints map[pod2:[80]] Mar 12 23:51:27.464: INFO: successfully validated that service endpoint-test2 in namespace services-592 exposes endpoints map[pod2:[80]] (28.35933ms elapsed) STEP: Deleting pod pod2 in namespace services-592 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-592 to expose endpoints map[] Mar 12 23:51:27.491: INFO: successfully validated that service endpoint-test2 in namespace services-592 exposes endpoints map[] (21.648012ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:51:27.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-592" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:5.313 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":275,"completed":62,"skipped":985,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:51:27.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:51:27.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4342" for this suite. STEP: Destroying namespace "nspatchtest-26467b84-4864-4bb9-a597-d262148428e0-2" for this suite. 
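For reference, the patch step above amounts to a strategic-merge patch that adds a label to the namespace object; an equivalent CLI form (namespace and label names here are illustrative) is:

  kubectl patch namespace nspatchtest-example -p '{"metadata":{"labels":{"testLabel":"testValue"}}}'
  kubectl get namespace nspatchtest-example --show-labels    # confirm the label is present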
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":63,"skipped":990,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:51:27.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:51:27.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4820" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":64,"skipped":1004,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:51:27.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 23:51:28.453: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 23:51:31.485: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:51:31.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3985" for this suite. STEP: Destroying namespace "webhook-3985-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":65,"skipped":1005,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:51:31.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating Agnhost RC Mar 12 23:51:31.630: INFO: namespace kubectl-9721 Mar 12 23:51:31.630: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9721' Mar 12 23:51:31.854: INFO: stderr: "" Mar 12 23:51:31.854: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 12 23:51:32.890: INFO: Selector matched 1 pods for map[app:agnhost] Mar 12 23:51:32.890: INFO: Found 0 / 1 Mar 12 23:51:33.858: INFO: Selector matched 1 pods for map[app:agnhost] Mar 12 23:51:33.858: INFO: Found 1 / 1 Mar 12 23:51:33.858: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 12 23:51:33.861: INFO: Selector matched 1 pods for map[app:agnhost] Mar 12 23:51:33.861: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 12 23:51:33.861: INFO: wait on agnhost-master startup in kubectl-9721 Mar 12 23:51:33.861: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs agnhost-master-wq6dw agnhost-master --namespace=kubectl-9721' Mar 12 23:51:33.972: INFO: stderr: "" Mar 12 23:51:33.972: INFO: stdout: "Paused\n" STEP: exposing RC Mar 12 23:51:33.972: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-9721' Mar 12 23:51:34.065: INFO: stderr: "" Mar 12 23:51:34.065: INFO: stdout: "service/rm2 exposed\n" Mar 12 23:51:34.079: INFO: Service rm2 in namespace kubectl-9721 found. STEP: exposing service Mar 12 23:51:36.085: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-9721' Mar 12 23:51:36.241: INFO: stderr: "" Mar 12 23:51:36.241: INFO: stdout: "service/rm3 exposed\n" Mar 12 23:51:36.266: INFO: Service rm3 in namespace kubectl-9721 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:51:38.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9721" for this suite. 
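Stripped of the harness's --server/--kubeconfig flags, the port mapping exercised above is just two expose calls: rm2 maps service port 1234 to container port 6379 on the RC's pods, and rm3 then fronts rm2's selector on port 2345:

  kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379
  kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379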
• [SLOW TEST:6.700 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":275,"completed":66,"skipped":1017,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:51:38.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: getting the auto-created API token STEP: reading a file in the container Mar 12 23:51:40.867: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8127 pod-service-account-7eaa98f4-fd25-4577-8b5d-07a46ac6a21a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Mar 12 23:51:41.104: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8127 pod-service-account-7eaa98f4-fd25-4577-8b5d-07a46ac6a21a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Mar 12 23:51:41.279: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8127 pod-service-account-7eaa98f4-fd25-4577-8b5d-07a46ac6a21a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:51:41.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8127" for this suite. 
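The three exec reads above cover the full contents of the projected service-account volume; from inside any pod with token automounting enabled, the same files sit at the standard mount path:

  cat /var/run/secrets/kubernetes.io/serviceaccount/token       # bearer token for the pod's service account
  cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt      # cluster CA bundle
  cat /var/run/secrets/kubernetes.io/serviceaccount/namespace   # namespace the pod runs in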
•{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":275,"completed":67,"skipped":1055,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:51:41.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 12 23:51:41.480: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config version' Mar 12 23:51:41.578: INFO: stderr: "" Mar 12 23:51:41.579: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.0.749+55bb72b77444f7\", GitCommit:\"55bb72b77444f7279fb268652df377422792c9f0\", GitTreeState:\"clean\", BuildDate:\"2020-03-12T17:51:58Z\", GoVersion:\"go1.13.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:09:19Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:51:41.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1104" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":275,"completed":68,"skipped":1060,"failed":0} ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:51:41.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test substitution in container's args Mar 12 23:51:41.664: INFO: Waiting up to 5m0s for pod "var-expansion-3edf9ff8-11e9-4a0f-af47-63d9f2aaef69" in namespace "var-expansion-5688" to be "Succeeded or Failed" Mar 12 23:51:41.669: INFO: Pod "var-expansion-3edf9ff8-11e9-4a0f-af47-63d9f2aaef69": Phase="Pending", Reason="", readiness=false. Elapsed: 5.124049ms Mar 12 23:51:43.672: INFO: Pod "var-expansion-3edf9ff8-11e9-4a0f-af47-63d9f2aaef69": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008433878s Mar 12 23:51:45.676: INFO: Pod "var-expansion-3edf9ff8-11e9-4a0f-af47-63d9f2aaef69": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012321015s STEP: Saw pod success Mar 12 23:51:45.676: INFO: Pod "var-expansion-3edf9ff8-11e9-4a0f-af47-63d9f2aaef69" satisfied condition "Succeeded or Failed" Mar 12 23:51:45.679: INFO: Trying to get logs from node latest-worker pod var-expansion-3edf9ff8-11e9-4a0f-af47-63d9f2aaef69 container dapi-container: STEP: delete the pod Mar 12 23:51:45.712: INFO: Waiting for pod var-expansion-3edf9ff8-11e9-4a0f-af47-63d9f2aaef69 to disappear Mar 12 23:51:45.715: INFO: Pod var-expansion-3edf9ff8-11e9-4a0f-af47-63d9f2aaef69 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:51:45.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5688" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":69,"skipped":1060,"failed":0} SSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:51:45.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test hostPath mode Mar 12 23:51:45.795: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-76" to be "Succeeded or Failed" Mar 12 23:51:45.798: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.57293ms Mar 12 23:51:47.801: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006037136s Mar 12 23:51:49.804: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009419917s STEP: Saw pod success Mar 12 23:51:49.804: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" Mar 12 23:51:49.807: INFO: Trying to get logs from node latest-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Mar 12 23:51:49.835: INFO: Waiting for pod pod-host-path-test to disappear Mar 12 23:51:49.845: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:51:49.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-76" for this suite. 
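A pod of roughly the shape this test creates can be sketched as follows; the host path, image, and command are illustrative (the real test uses the e2e mounttest image to report the mode bits it observes):

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-host-path-test
  spec:
    restartPolicy: Never
    volumes:
    - name: test-volume
      hostPath:
        path: /tmp/host-path-test      # directory on the node's filesystem
    containers:
    - name: test-container-1
      image: busybox:1.29
      command: ["sh", "-c", "stat -c '%a' /test-volume"]   # print the mount's permission bits
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume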
•{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":70,"skipped":1064,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:51:49.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0312 23:51:51.046342 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 12 23:51:51.046: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:51:51.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-212" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":71,"skipped":1076,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:51:51.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 12 23:51:51.188: INFO: Waiting up to 5m0s for pod "pod-63b3357d-3dcf-4612-8d2d-6553f725cd2d" in namespace "emptydir-8295" to be "Succeeded or Failed" Mar 12 23:51:51.193: INFO: Pod "pod-63b3357d-3dcf-4612-8d2d-6553f725cd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.357484ms Mar 12 23:51:53.196: INFO: Pod "pod-63b3357d-3dcf-4612-8d2d-6553f725cd2d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007973387s STEP: Saw pod success Mar 12 23:51:53.196: INFO: Pod "pod-63b3357d-3dcf-4612-8d2d-6553f725cd2d" satisfied condition "Succeeded or Failed" Mar 12 23:51:53.199: INFO: Trying to get logs from node latest-worker pod pod-63b3357d-3dcf-4612-8d2d-6553f725cd2d container test-container: STEP: delete the pod Mar 12 23:51:53.257: INFO: Waiting for pod pod-63b3357d-3dcf-4612-8d2d-6553f725cd2d to disappear Mar 12 23:51:53.263: INFO: Pod pod-63b3357d-3dcf-4612-8d2d-6553f725cd2d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:51:53.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8295" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":72,"skipped":1129,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:51:53.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test env composition Mar 12 23:51:53.330: INFO: Waiting up to 5m0s for pod "var-expansion-7051b3b3-b649-4e67-a8e6-86265fce02d1" in namespace "var-expansion-1048" to be "Succeeded or Failed" Mar 12 23:51:53.334: INFO: Pod "var-expansion-7051b3b3-b649-4e67-a8e6-86265fce02d1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.402284ms Mar 12 23:51:55.337: INFO: Pod "var-expansion-7051b3b3-b649-4e67-a8e6-86265fce02d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007266216s STEP: Saw pod success Mar 12 23:51:55.337: INFO: Pod "var-expansion-7051b3b3-b649-4e67-a8e6-86265fce02d1" satisfied condition "Succeeded or Failed" Mar 12 23:51:55.340: INFO: Trying to get logs from node latest-worker pod var-expansion-7051b3b3-b649-4e67-a8e6-86265fce02d1 container dapi-container: STEP: delete the pod Mar 12 23:51:55.395: INFO: Waiting for pod var-expansion-7051b3b3-b649-4e67-a8e6-86265fce02d1 to disappear Mar 12 23:51:55.400: INFO: Pod var-expansion-7051b3b3-b649-4e67-a8e6-86265fce02d1 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:51:55.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1048" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":73,"skipped":1161,"failed":0} S ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:51:55.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with configMap that has name projected-configmap-test-upd-422cfc61-0d34-4ee0-a81d-a4e804dc7f66 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-422cfc61-0d34-4ee0-a81d-a4e804dc7f66 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:53:08.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9198" for this suite. • [SLOW TEST:72.785 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":74,"skipped":1162,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:53:08.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 23:53:08.827: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 12 23:53:10.837: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719653988, 
loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719653988, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719653988, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719653988, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 23:53:13.881: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 12 23:53:13.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2051-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:53:15.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-515" for this suite. STEP: Destroying namespace "webhook-515-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.033 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":75,"skipped":1177,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:53:15.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-11393df6-e659-4d44-83b3-4b3cbb7ad85d STEP: Creating a pod to test consume configMaps Mar 12 23:53:15.298: 
INFO: Waiting up to 5m0s for pod "pod-configmaps-e265cb53-3966-4491-a3ca-0a399615da8f" in namespace "configmap-6669" to be "Succeeded or Failed" Mar 12 23:53:15.328: INFO: Pod "pod-configmaps-e265cb53-3966-4491-a3ca-0a399615da8f": Phase="Pending", Reason="", readiness=false. Elapsed: 30.67176ms Mar 12 23:53:17.331: INFO: Pod "pod-configmaps-e265cb53-3966-4491-a3ca-0a399615da8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.033921172s STEP: Saw pod success Mar 12 23:53:17.332: INFO: Pod "pod-configmaps-e265cb53-3966-4491-a3ca-0a399615da8f" satisfied condition "Succeeded or Failed" Mar 12 23:53:17.334: INFO: Trying to get logs from node latest-worker pod pod-configmaps-e265cb53-3966-4491-a3ca-0a399615da8f container configmap-volume-test: STEP: delete the pod Mar 12 23:53:17.362: INFO: Waiting for pod pod-configmaps-e265cb53-3966-4491-a3ca-0a399615da8f to disappear Mar 12 23:53:17.366: INFO: Pod pod-configmaps-e265cb53-3966-4491-a3ca-0a399615da8f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:53:17.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6669" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":76,"skipped":1194,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:53:17.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating all guestbook components Mar 12 23:53:17.462: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend Mar 12 23:53:17.462: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-838' Mar 12 23:53:17.788: INFO: stderr: "" Mar 12 23:53:17.788: INFO: stdout: "service/agnhost-slave created\n" Mar 12 23:53:17.788: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend Mar 12 23:53:17.788: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-838' Mar 12 23:53:18.048: INFO: stderr: "" Mar 12 23:53:18.048: INFO: stdout: "service/agnhost-master created\n" Mar 12 23:53:18.048: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, 
uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Mar 12 23:53:18.048: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-838' Mar 12 23:53:18.321: INFO: stderr: "" Mar 12 23:53:18.321: INFO: stdout: "service/frontend created\n" Mar 12 23:53:18.321: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Mar 12 23:53:18.321: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-838' Mar 12 23:53:18.527: INFO: stderr: "" Mar 12 23:53:18.527: INFO: stdout: "deployment.apps/frontend created\n" Mar 12 23:53:18.527: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Mar 12 23:53:18.527: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-838' Mar 12 23:53:18.848: INFO: stderr: "" Mar 12 23:53:18.848: INFO: stdout: "deployment.apps/agnhost-master created\n" Mar 12 23:53:18.849: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Mar 12 23:53:18.849: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-838' Mar 12 23:53:19.187: INFO: stderr: "" Mar 12 23:53:19.187: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Mar 12 23:53:19.187: INFO: Waiting for all frontend pods to be Running. Mar 12 23:53:24.238: INFO: Waiting for frontend to serve content. Mar 12 23:53:24.247: INFO: Trying to add a new entry to the guestbook. Mar 12 23:53:24.257: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Mar 12 23:53:24.264: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-838' Mar 12 23:53:24.388: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 12 23:53:24.388: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Mar 12 23:53:24.389: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-838' Mar 12 23:53:24.520: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 12 23:53:24.520: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 12 23:53:24.520: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-838' Mar 12 23:53:24.634: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 12 23:53:24.634: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 12 23:53:24.635: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-838' Mar 12 23:53:24.705: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 12 23:53:24.705: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 12 23:53:24.705: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-838' Mar 12 23:53:24.796: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 12 23:53:24.796: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 12 23:53:24.796: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-838' Mar 12 23:53:24.891: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 12 23:53:24.891: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:53:24.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-838" for this suite. 
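Note: the repeated stderr warnings above are the expected result of `kubectl delete --grace-period=0 --force`, which removes the API object immediately without waiting for the backing containers to terminate. As a reference, a minimal sketch of the same create/force-delete round trip; the file name and namespace placeholder are illustrative, and the manifest mirrors the frontend Service created earlier in this test:
# kubectl create -f frontend-service.yaml --namespace=<test namespace>
# kubectl delete --grace-period=0 --force -f frontend-service.yaml --namespace=<test namespace>
apiVersion: v1
kind: Service
metadata:
  name: frontend          # matches the manifest created above
  labels:
    app: guestbook
    tier: frontend
spec:
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend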
• [SLOW TEST:7.552 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":275,"completed":77,"skipped":1216,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:53:24.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 12 23:53:25.076: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Mar 12 23:53:27.904: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8613 create -f -' Mar 12 23:53:29.953: INFO: stderr: "" Mar 12 23:53:29.953: INFO: stdout: "e2e-test-crd-publish-openapi-8392-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 12 23:53:29.953: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8613 delete e2e-test-crd-publish-openapi-8392-crds test-foo' Mar 12 23:53:30.058: INFO: stderr: "" Mar 12 23:53:30.058: INFO: stdout: "e2e-test-crd-publish-openapi-8392-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Mar 12 23:53:30.059: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8613 apply -f -' Mar 12 23:53:30.330: INFO: stderr: "" Mar 12 23:53:30.330: INFO: stdout: "e2e-test-crd-publish-openapi-8392-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 12 23:53:30.330: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8613 delete e2e-test-crd-publish-openapi-8392-crds test-foo' Mar 12 23:53:30.411: INFO: stderr: "" Mar 12 23:53:30.411: INFO: stdout: "e2e-test-crd-publish-openapi-8392-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Mar 12 23:53:30.411: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8613 create -f -' Mar 12 23:53:30.856: INFO: rc: 1 Mar 12 23:53:30.856: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8613 apply -f -' Mar 12 23:53:31.098: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Mar 12 23:53:31.099: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8613 create -f -' Mar 12 23:53:31.321: INFO: rc: 1 Mar 12 23:53:31.321: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8613 apply -f -' Mar 12 23:53:31.513: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Mar 12 23:53:31.513: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8392-crds' Mar 12 23:53:31.722: INFO: stderr: "" Mar 12 23:53:31.722: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8392-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Mar 12 23:53:31.723: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8392-crds.metadata' Mar 12 23:53:31.968: INFO: stderr: "" Mar 12 23:53:31.968: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8392-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.
Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.
If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.
Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Mar 12 23:53:31.968: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8392-crds.spec' Mar 12 23:53:32.245: INFO: stderr: "" Mar 12 23:53:32.245: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8392-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Mar 12 23:53:32.245: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8392-crds.spec.bars' Mar 12 23:53:32.433: INFO: stderr: "" Mar 12 23:53:32.433: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8392-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Mar 12 23:53:32.434: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8392-crds.spec.bars2' Mar 12 23:53:32.611: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:53:35.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8613" for this suite.
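Note: the kubectl explain output above is rendered from the OpenAPI schema the CRD publishes. A sketch of a structural schema that would produce the spec.bars fields shown; the group, kind, and resource names are illustrative assumptions (the suite generates randomized names such as e2e-test-crd-publish-openapi-8392-crd):
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com        # illustrative
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            description: Specification of Foo
            type: object
            properties:
              bars:
                description: List of Bars and their specs.
                type: array
                items:
                  type: object
                  required: ["name"]
                  properties:
                    name:
                      description: Name of Bar.
                      type: string
                    age:
                      description: Age of Bar.
                      type: string
                    bazs:
                      description: List of Bazs.
                      type: array
                      items:
                        type: string
          status:
            description: Status of Foo
            type: object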
• [SLOW TEST:10.379 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":275,"completed":78,"skipped":1224,"failed":0} SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:53:35.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-06d4eeef-caba-4a6e-9235-0ac21a8d0723 STEP: Creating a pod to test consume configMaps Mar 12 23:53:35.380: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-786ce75c-84fa-44b5-bdcc-bd244ffc8a44" in namespace "projected-3291" to be "Succeeded or Failed" Mar 12 23:53:35.385: INFO: Pod "pod-projected-configmaps-786ce75c-84fa-44b5-bdcc-bd244ffc8a44": Phase="Pending", Reason="", readiness=false. Elapsed: 4.76944ms Mar 12 23:53:37.388: INFO: Pod "pod-projected-configmaps-786ce75c-84fa-44b5-bdcc-bd244ffc8a44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008020943s STEP: Saw pod success Mar 12 23:53:37.388: INFO: Pod "pod-projected-configmaps-786ce75c-84fa-44b5-bdcc-bd244ffc8a44" satisfied condition "Succeeded or Failed" Mar 12 23:53:37.390: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-786ce75c-84fa-44b5-bdcc-bd244ffc8a44 container projected-configmap-volume-test: STEP: delete the pod Mar 12 23:53:37.406: INFO: Waiting for pod pod-projected-configmaps-786ce75c-84fa-44b5-bdcc-bd244ffc8a44 to disappear Mar 12 23:53:37.410: INFO: Pod pod-projected-configmaps-786ce75c-84fa-44b5-bdcc-bd244ffc8a44 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:53:37.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3291" for this suite. 
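Note: a sketch of the pod shape this test exercises, a projected configMap volume consumed by a container running as a non-root UID. The image, UID, key name, and mount path are illustrative assumptions; the configMap name is the one created above:
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # illustrative
spec:
  securityContext:
    runAsUser: 1000                         # assumed non-root UID
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                          # illustrative image
    command: ["cat", "/etc/projected-configmap-volume/data-1"]   # assumed key name
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-06d4eeef-caba-4a6e-9235-0ac21a8d0723   # from the run above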
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":79,"skipped":1226,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:53:37.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 12 23:53:38.395: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 23:53:41.411: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 12 23:53:41.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:53:42.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-7791" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:5.310 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":275,"completed":80,"skipped":1227,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:53:42.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 23:53:43.517: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 23:53:46.548: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:53:46.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8137" for this suite. STEP: Destroying namespace "webhook-8137-markers" for this suite. 
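Note: the "deleting the collection" step above works because the suite labels the ValidatingWebhookConfigurations it creates and then deletes by label selector. A sketch with an assumed label key and assumed webhook details; only the service namespace/name come from this run:
# List and bulk-delete by label, roughly as the test does through the API:
#   kubectl get validatingwebhookconfigurations -l e2e-list-test-webhooks=<run-id>
#   kubectl delete validatingwebhookconfigurations -l e2e-list-test-webhooks=<run-id>
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-configmap-example          # illustrative
  labels:
    e2e-list-test-webhooks: "run-id"    # assumed label used for the collection delete
webhooks:
- name: deny-unwanted-configmap-data.example.com   # illustrative
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["configmaps"]
  clientConfig:
    caBundle: "<base64-encoded CA bundle>"   # placeholder
    service:
      namespace: webhook-8137           # namespace used in this run
      name: e2e-test-webhook
      path: /configmaps                 # assumed path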
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":81,"skipped":1239,"failed":0} SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:53:47.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:53:49.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4420" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":275,"completed":82,"skipped":1242,"failed":0} SS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:53:49.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-upd-3faa89ab-140c-4072-afa7-7ff3148264eb STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:53:51.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9146" for this suite. 
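Note: binaryData lets a ConfigMap carry non-UTF-8 payloads alongside plain-text data keys; binary values are base64-encoded in the manifest and surface as raw bytes in the mounted volume. A sketch with illustrative names and payload:
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-upd-example    # illustrative
data:
  text-data: "some text"              # UTF-8 values go under data
binaryData:
  dump.bin: aGVsbG8gd29ybGQK          # arbitrary bytes, base64-encoded ("hello world\n")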
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":83,"skipped":1244,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:53:51.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Mar 12 23:53:53.959: INFO: Successfully updated pod "labelsupdate42b20e9e-a875-4535-8033-991bf648cd86" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:53:55.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4106" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":84,"skipped":1247,"failed":0} ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:53:55.980: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 23:53:56.867: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 12 23:53:58.884: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719654036, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719654036, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719654036, loc:(*time.Location)(0x7b1e080)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719654036, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 23:54:01.896: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:54:12.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6092" for this suite. STEP: Destroying namespace "webhook-6092-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.184 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":275,"completed":85,"skipped":1247,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:54:12.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting the proxy server Mar 12 23:54:12.225: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:54:12.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2328" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":275,"completed":86,"skipped":1251,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:54:12.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-fdb5555f-3ea2-42be-bdde-66328ea9b0e4 STEP: Creating a pod to test consume secrets Mar 12 23:54:12.376: INFO: Waiting up to 5m0s for pod "pod-secrets-3ab33ad1-d22b-446c-8286-1cdf3129053b" in namespace "secrets-249" to be "Succeeded or Failed" Mar 12 23:54:12.381: INFO: Pod "pod-secrets-3ab33ad1-d22b-446c-8286-1cdf3129053b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.360842ms Mar 12 23:54:14.384: INFO: Pod "pod-secrets-3ab33ad1-d22b-446c-8286-1cdf3129053b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007372533s STEP: Saw pod success Mar 12 23:54:14.384: INFO: Pod "pod-secrets-3ab33ad1-d22b-446c-8286-1cdf3129053b" satisfied condition "Succeeded or Failed" Mar 12 23:54:14.386: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-3ab33ad1-d22b-446c-8286-1cdf3129053b container secret-env-test: STEP: delete the pod Mar 12 23:54:14.433: INFO: Waiting for pod pod-secrets-3ab33ad1-d22b-446c-8286-1cdf3129053b to disappear Mar 12 23:54:14.436: INFO: Pod pod-secrets-3ab33ad1-d22b-446c-8286-1cdf3129053b no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:54:14.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-249" for this suite. 
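Note: a sketch of the env-var consumption pattern this test covers, assuming an illustrative image, variable name, and key; the Secret name is the one created above:
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example           # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox                    # illustrative image
    command: ["sh", "-c", "env"]      # print the environment so the value is visible in logs
    env:
    - name: SECRET_DATA               # assumed variable name
      valueFrom:
        secretKeyRef:
          name: secret-test-fdb5555f-3ea2-42be-bdde-66328ea9b0e4   # from the run above
          key: data-1                 # assumed key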
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":275,"completed":87,"skipped":1259,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:54:14.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Mar 12 23:54:17.612: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:54:18.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-8058" for this suite. •{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":275,"completed":88,"skipped":1275,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:54:18.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-3aa99f6d-f687-4be6-ae31-9d08a59c31a9 in namespace container-probe-4064 Mar 12 23:54:20.753: INFO: Started pod liveness-3aa99f6d-f687-4be6-ae31-9d08a59c31a9 in namespace container-probe-4064 STEP: checking the pod's current state and verifying that restartCount is present Mar 12 23:54:20.756: INFO: Initial restart count of pod liveness-3aa99f6d-f687-4be6-ae31-9d08a59c31a9 is 0 Mar 12 23:54:38.801: INFO: Restart count of pod container-probe-4064/liveness-3aa99f6d-f687-4be6-ae31-9d08a59c31a9 is now 1 (18.044989393s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:54:38.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "container-probe-4064" for this suite. • [SLOW TEST:20.200 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":89,"skipped":1288,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:54:38.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service nodeport-service with the type=NodePort in namespace services-2612 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-2612 STEP: creating replication controller externalsvc in namespace services-2612 I0312 23:54:39.055380 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-2612, replica count: 2 I0312 23:54:42.105763 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Mar 12 23:54:42.138: INFO: Creating new exec pod Mar 12 23:54:44.152: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-2612 execpodgzvdd -- /bin/sh -x -c nslookup nodeport-service' Mar 12 23:54:44.340: INFO: stderr: "I0312 23:54:44.251495 1903 log.go:172] (0xc00003a160) (0xc000809180) Create stream\nI0312 23:54:44.251524 1903 log.go:172] (0xc00003a160) (0xc000809180) Stream added, broadcasting: 1\nI0312 23:54:44.253004 1903 log.go:172] (0xc00003a160) Reply frame received for 1\nI0312 23:54:44.253024 1903 log.go:172] (0xc00003a160) (0xc000809360) Create stream\nI0312 23:54:44.253031 1903 log.go:172] (0xc00003a160) (0xc000809360) Stream added, broadcasting: 3\nI0312 23:54:44.253574 1903 log.go:172] (0xc00003a160) Reply frame received for 3\nI0312 23:54:44.253593 1903 log.go:172] (0xc00003a160) (0xc000582000) Create stream\nI0312 23:54:44.253601 1903 log.go:172] (0xc00003a160) (0xc000582000) Stream added, broadcasting: 5\nI0312 23:54:44.254146 1903 log.go:172] (0xc00003a160) Reply frame received for 5\nI0312 23:54:44.331044 1903 log.go:172] (0xc00003a160) Data frame received for 5\nI0312 23:54:44.331063 1903 log.go:172] (0xc000582000) (5) Data frame handling\nI0312 23:54:44.331073 
1903 log.go:172] (0xc000582000) (5) Data frame sent\n+ nslookup nodeport-service\nI0312 23:54:44.335862 1903 log.go:172] (0xc00003a160) Data frame received for 3\nI0312 23:54:44.335875 1903 log.go:172] (0xc000809360) (3) Data frame handling\nI0312 23:54:44.335884 1903 log.go:172] (0xc000809360) (3) Data frame sent\nI0312 23:54:44.336301 1903 log.go:172] (0xc00003a160) Data frame received for 3\nI0312 23:54:44.336315 1903 log.go:172] (0xc000809360) (3) Data frame handling\nI0312 23:54:44.336329 1903 log.go:172] (0xc000809360) (3) Data frame sent\nI0312 23:54:44.336522 1903 log.go:172] (0xc00003a160) Data frame received for 3\nI0312 23:54:44.336535 1903 log.go:172] (0xc000809360) (3) Data frame handling\nI0312 23:54:44.336574 1903 log.go:172] (0xc00003a160) Data frame received for 5\nI0312 23:54:44.336582 1903 log.go:172] (0xc000582000) (5) Data frame handling\nI0312 23:54:44.337699 1903 log.go:172] (0xc00003a160) Data frame received for 1\nI0312 23:54:44.337711 1903 log.go:172] (0xc000809180) (1) Data frame handling\nI0312 23:54:44.337716 1903 log.go:172] (0xc000809180) (1) Data frame sent\nI0312 23:54:44.337724 1903 log.go:172] (0xc00003a160) (0xc000809180) Stream removed, broadcasting: 1\nI0312 23:54:44.337760 1903 log.go:172] (0xc00003a160) Go away received\nI0312 23:54:44.337934 1903 log.go:172] (0xc00003a160) (0xc000809180) Stream removed, broadcasting: 1\nI0312 23:54:44.337944 1903 log.go:172] (0xc00003a160) (0xc000809360) Stream removed, broadcasting: 3\nI0312 23:54:44.337948 1903 log.go:172] (0xc00003a160) (0xc000582000) Stream removed, broadcasting: 5\n" Mar 12 23:54:44.340: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-2612.svc.cluster.local\tcanonical name = externalsvc.services-2612.svc.cluster.local.\nName:\texternalsvc.services-2612.svc.cluster.local\nAddress: 10.96.231.110\n\n" STEP: deleting ReplicationController externalsvc in namespace services-2612, will wait for the garbage collector to delete the pods Mar 12 23:54:44.397: INFO: Deleting ReplicationController externalsvc took: 4.701811ms Mar 12 23:54:44.697: INFO: Terminating ReplicationController externalsvc pods took: 300.181456ms Mar 12 23:54:52.520: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:54:52.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2612" for this suite. 
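Note: the nslookup stdout above shows exactly what an ExternalName service produces, a CNAME from the service's cluster DNS name to the named target. After the type change, the service's end state is roughly the following; only type and externalName matter for resolution:
apiVersion: v1
kind: Service
metadata:
  name: nodeport-service
  namespace: services-2612
spec:
  type: ExternalName
  externalName: externalsvc.services-2612.svc.cluster.local   # the CNAME target seen in the nslookup output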
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:13.762 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":275,"completed":90,"skipped":1343,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:54:52.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-83c4e300-53cb-4483-82cc-d984174f3f5a STEP: Creating a pod to test consume configMaps Mar 12 23:54:52.679: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2a49b782-c70f-4ac8-86b6-2431d67dbe15" in namespace "projected-6476" to be "Succeeded or Failed" Mar 12 23:54:52.683: INFO: Pod "pod-projected-configmaps-2a49b782-c70f-4ac8-86b6-2431d67dbe15": Phase="Pending", Reason="", readiness=false. Elapsed: 3.413349ms Mar 12 23:54:54.686: INFO: Pod "pod-projected-configmaps-2a49b782-c70f-4ac8-86b6-2431d67dbe15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007293871s STEP: Saw pod success Mar 12 23:54:54.686: INFO: Pod "pod-projected-configmaps-2a49b782-c70f-4ac8-86b6-2431d67dbe15" satisfied condition "Succeeded or Failed" Mar 12 23:54:54.689: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-2a49b782-c70f-4ac8-86b6-2431d67dbe15 container projected-configmap-volume-test: STEP: delete the pod Mar 12 23:54:54.718: INFO: Waiting for pod pod-projected-configmaps-2a49b782-c70f-4ac8-86b6-2431d67dbe15 to disappear Mar 12 23:54:54.724: INFO: Pod pod-projected-configmaps-2a49b782-c70f-4ac8-86b6-2431d67dbe15 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:54:54.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6476" for this suite. 
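Note: "mappings and Item mode" refers to the items list of a projected configMap source: each item remaps a key to a chosen path inside the volume and may set a per-file mode. A sketch with assumed key, path, and image; the configMap name is the one created above:
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                          # illustrative image
    command: ["cat", "/etc/projected-configmap-volume/path/to/data-2"]   # assumed key/path
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map-83c4e300-53cb-4483-82cc-d984174f3f5a   # from the run above
          items:
          - key: data-2                     # assumed key, remapped below
            path: path/to/data-2            # the "mapping": key rendered at this path
            mode: 0400                      # the "Item mode": per-file permissions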
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":91,"skipped":1366,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:54:54.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 23:54:55.061: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 12 23:54:57.072: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719654095, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719654095, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719654095, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719654095, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 23:55:00.102: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:55:00.107: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7161" for this suite. STEP: Destroying namespace "webhook-7161-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.462 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":275,"completed":92,"skipped":1373,"failed":0} SSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:55:00.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Mar 12 23:55:00.252: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 12 23:55:00.261: INFO: Waiting for terminating namespaces to be deleted... 
Mar 12 23:55:00.282: INFO: Logging pods the kubelet thinks are on node latest-worker before test Mar 12 23:55:00.301: INFO: kube-proxy-9jc24 from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container statuses recorded) Mar 12 23:55:00.301: INFO: Container kube-proxy ready: true, restart count 0 Mar 12 23:55:00.301: INFO: kindnet-2j5xm from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container statuses recorded) Mar 12 23:55:00.301: INFO: Container kindnet-cni ready: true, restart count 0 Mar 12 23:55:00.301: INFO: sample-webhook-deployment-6cc9cc9dc-fqv45 from webhook-7161 started at 2020-03-12 23:54:55 +0000 UTC (1 container statuses recorded) Mar 12 23:55:00.301: INFO: Container sample-webhook ready: true, restart count 0 Mar 12 23:55:00.301: INFO: Logging pods the kubelet thinks are on node latest-worker2 before test Mar 12 23:55:00.333: INFO: kube-proxy-cx5xz from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container statuses recorded) Mar 12 23:55:00.333: INFO: Container kube-proxy ready: true, restart count 0 Mar 12 23:55:00.333: INFO: kindnet-spz5f from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container statuses recorded) Mar 12 23:55:00.333: INFO: Container kindnet-cni ready: true, restart count 0 Mar 12 23:55:00.333: INFO: coredns-6955765f44-cgshp from kube-system started at 2020-03-08 14:50:16 +0000 UTC (1 container statuses recorded) Mar 12 23:55:00.333: INFO: Container coredns ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-e531f5f5-3d74-4c80-bb1b-db404ff486e9 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-e531f5f5-3d74-4c80-bb1b-db404ff486e9 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-e531f5f5-3d74-4c80-bb1b-db404ff486e9 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:55:08.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2735" for this suite.
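------------------------------
The predicate exercised above lets three pods share hostPort 54321 because a hostPort conflict only exists when hostPort, hostIP and protocol all collide. A sketch of the port spec that varies between the three pods, with an assumed container-side port and helper name:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// portSpec pins a container to hostPort 54321 on the given hostIP and
// protocol; the scheduler rejects a pod only when another pod on the
// node already claims the same (hostPort, hostIP, protocol) triple.
func portSpec(hostIP string, proto corev1.Protocol) corev1.ContainerPort {
	return corev1.ContainerPort{
		ContainerPort: 8080, // assumed container-side port
		HostPort:      54321,
		HostIP:        hostIP,
		Protocol:      proto,
	}
}

func main() {
	pod1 := portSpec("127.0.0.1", corev1.ProtocolTCP) // scheduled
	pod2 := portSpec("127.0.0.2", corev1.ProtocolTCP) // different hostIP: no conflict
	pod3 := portSpec("127.0.0.2", corev1.ProtocolUDP) // different protocol: no conflict
	fmt.Println(pod1, pod2, pod3)
}
------------------------------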
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:8.295 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":93,"skipped":1383,"failed":0} SS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:55:08.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-dafafd42-da4f-4a72-a3eb-5e2077f4940d STEP: Creating a pod to test consume configMaps Mar 12 23:55:08.656: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-296b1674-a758-47c7-8f1f-95a0345ab70e" in namespace "projected-9913" to be "Succeeded or Failed" Mar 12 23:55:08.678: INFO: Pod "pod-projected-configmaps-296b1674-a758-47c7-8f1f-95a0345ab70e": Phase="Pending", Reason="", readiness=false. Elapsed: 21.992693ms Mar 12 23:55:10.681: INFO: Pod "pod-projected-configmaps-296b1674-a758-47c7-8f1f-95a0345ab70e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.025660066s STEP: Saw pod success Mar 12 23:55:10.681: INFO: Pod "pod-projected-configmaps-296b1674-a758-47c7-8f1f-95a0345ab70e" satisfied condition "Succeeded or Failed" Mar 12 23:55:10.684: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-296b1674-a758-47c7-8f1f-95a0345ab70e container projected-configmap-volume-test: STEP: delete the pod Mar 12 23:55:10.697: INFO: Waiting for pod pod-projected-configmaps-296b1674-a758-47c7-8f1f-95a0345ab70e to disappear Mar 12 23:55:10.713: INFO: Pod pod-projected-configmaps-296b1674-a758-47c7-8f1f-95a0345ab70e no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:55:10.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9913" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":94,"skipped":1385,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:55:10.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 12 23:55:16.857: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 12 23:55:16.880: INFO: Pod pod-with-poststart-exec-hook still exists Mar 12 23:55:18.880: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 12 23:55:18.887: INFO: Pod pod-with-poststart-exec-hook still exists Mar 12 23:55:20.880: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 12 23:55:20.884: INFO: Pod pod-with-poststart-exec-hook still exists Mar 12 23:55:22.880: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 12 23:55:22.884: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:55:22.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4011" for this suite. 
• [SLOW TEST:12.171 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":95,"skipped":1420,"failed":0} S ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:55:22.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 12 23:55:22.961: INFO: Waiting up to 5m0s for pod "busybox-user-65534-843835f2-9091-464b-a7d5-906319e2dbe3" in namespace "security-context-test-2730" to be "Succeeded or Failed" Mar 12 23:55:22.993: INFO: Pod "busybox-user-65534-843835f2-9091-464b-a7d5-906319e2dbe3": Phase="Pending", Reason="", readiness=false. Elapsed: 32.023953ms Mar 12 23:55:24.996: INFO: Pod "busybox-user-65534-843835f2-9091-464b-a7d5-906319e2dbe3": Phase="Running", Reason="", readiness=true. Elapsed: 2.035830484s Mar 12 23:55:26.999: INFO: Pod "busybox-user-65534-843835f2-9091-464b-a7d5-906319e2dbe3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038633001s Mar 12 23:55:26.999: INFO: Pod "busybox-user-65534-843835f2-9091-464b-a7d5-906319e2dbe3" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:55:26.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2730" for this suite. 
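------------------------------
The Security Context test above asserts that the container process runs as uid 65534 (the conventional "nobody" uid). A minimal sketch of the securityContext that requests this, assuming a busybox image:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	uid := int64(65534) // the uid the test asserts on
	c := corev1.Container{
		Name:    "busybox-user-65534",
		Image:   "busybox", // assumed image
		Command: []string{"id", "-u"},
		SecurityContext: &corev1.SecurityContext{
			RunAsUser: &uid, // kubelet starts the container process as uid 65534
		},
	}
	fmt.Printf("runAsUser=%d\n", *c.SecurityContext.RunAsUser)
}
------------------------------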
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":96,"skipped":1421,"failed":0} SS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:55:27.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:55:31.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-7102" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":97,"skipped":1423,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:55:31.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 12 23:55:31.743: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 12 23:55:34.810: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:55:35.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-152" for this suite. STEP: Destroying namespace "webhook-152-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":275,"completed":98,"skipped":1437,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:55:35.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Mar 12 23:55:43.555: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2696 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 23:55:43.555: INFO: >>> kubeConfig: /root/.kube/config I0312 23:55:43.576194 7 log.go:172] (0xc002dc5130) (0xc0015110e0) Create stream I0312 23:55:43.576230 7 log.go:172] (0xc002dc5130) (0xc0015110e0) Stream added, broadcasting: 1 I0312 23:55:43.577968 7 log.go:172] (0xc002dc5130) Reply frame received for 1 I0312 23:55:43.578008 7 log.go:172] (0xc002dc5130) (0xc0014daa00) Create stream I0312 23:55:43.578021 7 log.go:172] (0xc002dc5130) (0xc0014daa00) Stream added, broadcasting: 3 I0312 23:55:43.579547 7 log.go:172] (0xc002dc5130) Reply frame received for 3 I0312 23:55:43.579589 7 log.go:172] (0xc002dc5130) (0xc001c32c80) Create stream I0312 23:55:43.579607 7 log.go:172] (0xc002dc5130) (0xc001c32c80) Stream added, broadcasting: 5 I0312 23:55:43.580366 7 log.go:172] (0xc002dc5130) Reply frame received for 5 I0312 23:55:43.652405 7 log.go:172] (0xc002dc5130) Data frame received for 3 I0312 23:55:43.652439 7 log.go:172] (0xc0014daa00) (3) Data frame handling I0312 23:55:43.652453 7 log.go:172] (0xc0014daa00) (3) Data frame sent I0312 23:55:43.652464 7 log.go:172] (0xc002dc5130) Data frame received for 3 I0312 23:55:43.652475 7 log.go:172] (0xc0014daa00) (3) Data frame handling I0312 23:55:43.652531 7 log.go:172] (0xc002dc5130) Data frame received for 5 I0312 23:55:43.652565 7 log.go:172] (0xc001c32c80) (5) Data frame handling I0312 23:55:43.653860 7 log.go:172] (0xc002dc5130) Data frame received for 1 I0312 23:55:43.653875 7 log.go:172] (0xc0015110e0) (1) Data frame handling I0312 23:55:43.653890 7 log.go:172] (0xc0015110e0) (1) Data frame sent I0312 23:55:43.653904 7 log.go:172] (0xc002dc5130) (0xc0015110e0) Stream removed, broadcasting: 1 I0312 23:55:43.653918 7 log.go:172] (0xc002dc5130) Go away received I0312 23:55:43.654274 7 log.go:172] (0xc002dc5130) (0xc0015110e0) Stream removed, broadcasting: 1 I0312 23:55:43.654287 7 log.go:172] (0xc002dc5130) (0xc0014daa00) Stream removed, 
broadcasting: 3 I0312 23:55:43.654292 7 log.go:172] (0xc002dc5130) (0xc001c32c80) Stream removed, broadcasting: 5 Mar 12 23:55:43.654: INFO: Exec stderr: "" Mar 12 23:55:43.654: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2696 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 23:55:43.654: INFO: >>> kubeConfig: /root/.kube/config I0312 23:55:43.677799 7 log.go:172] (0xc00271db80) (0xc001c330e0) Create stream I0312 23:55:43.677816 7 log.go:172] (0xc00271db80) (0xc001c330e0) Stream added, broadcasting: 1 I0312 23:55:43.679855 7 log.go:172] (0xc00271db80) Reply frame received for 1 I0312 23:55:43.679891 7 log.go:172] (0xc00271db80) (0xc0014dab40) Create stream I0312 23:55:43.679902 7 log.go:172] (0xc00271db80) (0xc0014dab40) Stream added, broadcasting: 3 I0312 23:55:43.680785 7 log.go:172] (0xc00271db80) Reply frame received for 3 I0312 23:55:43.680819 7 log.go:172] (0xc00271db80) (0xc0014dabe0) Create stream I0312 23:55:43.680831 7 log.go:172] (0xc00271db80) (0xc0014dabe0) Stream added, broadcasting: 5 I0312 23:55:43.681545 7 log.go:172] (0xc00271db80) Reply frame received for 5 I0312 23:55:43.733207 7 log.go:172] (0xc00271db80) Data frame received for 3 I0312 23:55:43.733232 7 log.go:172] (0xc0014dab40) (3) Data frame handling I0312 23:55:43.733248 7 log.go:172] (0xc00271db80) Data frame received for 5 I0312 23:55:43.733265 7 log.go:172] (0xc0014dabe0) (5) Data frame handling I0312 23:55:43.733281 7 log.go:172] (0xc0014dab40) (3) Data frame sent I0312 23:55:43.733289 7 log.go:172] (0xc00271db80) Data frame received for 3 I0312 23:55:43.733297 7 log.go:172] (0xc0014dab40) (3) Data frame handling I0312 23:55:43.734471 7 log.go:172] (0xc00271db80) Data frame received for 1 I0312 23:55:43.734485 7 log.go:172] (0xc001c330e0) (1) Data frame handling I0312 23:55:43.734494 7 log.go:172] (0xc001c330e0) (1) Data frame sent I0312 23:55:43.734505 7 log.go:172] (0xc00271db80) (0xc001c330e0) Stream removed, broadcasting: 1 I0312 23:55:43.734576 7 log.go:172] (0xc00271db80) (0xc001c330e0) Stream removed, broadcasting: 1 I0312 23:55:43.734586 7 log.go:172] (0xc00271db80) (0xc0014dab40) Stream removed, broadcasting: 3 I0312 23:55:43.734601 7 log.go:172] (0xc00271db80) Go away received I0312 23:55:43.734698 7 log.go:172] (0xc00271db80) (0xc0014dabe0) Stream removed, broadcasting: 5 Mar 12 23:55:43.734: INFO: Exec stderr: "" Mar 12 23:55:43.734: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2696 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 23:55:43.734: INFO: >>> kubeConfig: /root/.kube/config I0312 23:55:43.757198 7 log.go:172] (0xc005de6d10) (0xc0014db220) Create stream I0312 23:55:43.757221 7 log.go:172] (0xc005de6d10) (0xc0014db220) Stream added, broadcasting: 1 I0312 23:55:43.760428 7 log.go:172] (0xc005de6d10) Reply frame received for 1 I0312 23:55:43.760471 7 log.go:172] (0xc005de6d10) (0xc0014db360) Create stream I0312 23:55:43.760487 7 log.go:172] (0xc005de6d10) (0xc0014db360) Stream added, broadcasting: 3 I0312 23:55:43.763928 7 log.go:172] (0xc005de6d10) Reply frame received for 3 I0312 23:55:43.763970 7 log.go:172] (0xc005de6d10) (0xc0017581e0) Create stream I0312 23:55:43.763981 7 log.go:172] (0xc005de6d10) (0xc0017581e0) Stream added, broadcasting: 5 I0312 23:55:43.767021 7 log.go:172] (0xc005de6d10) Reply frame received for 5 I0312 23:55:43.820827 7 log.go:172] (0xc005de6d10) 
Data frame received for 3 I0312 23:55:43.820848 7 log.go:172] (0xc005de6d10) Data frame received for 5 I0312 23:55:43.820858 7 log.go:172] (0xc0017581e0) (5) Data frame handling I0312 23:55:43.820878 7 log.go:172] (0xc0014db360) (3) Data frame handling I0312 23:55:43.820890 7 log.go:172] (0xc0014db360) (3) Data frame sent I0312 23:55:43.820897 7 log.go:172] (0xc005de6d10) Data frame received for 3 I0312 23:55:43.820901 7 log.go:172] (0xc0014db360) (3) Data frame handling I0312 23:55:43.822193 7 log.go:172] (0xc005de6d10) Data frame received for 1 I0312 23:55:43.822208 7 log.go:172] (0xc0014db220) (1) Data frame handling I0312 23:55:43.822221 7 log.go:172] (0xc0014db220) (1) Data frame sent I0312 23:55:43.822496 7 log.go:172] (0xc005de6d10) (0xc0014db220) Stream removed, broadcasting: 1 I0312 23:55:43.822512 7 log.go:172] (0xc005de6d10) Go away received I0312 23:55:43.822636 7 log.go:172] (0xc005de6d10) (0xc0014db220) Stream removed, broadcasting: 1 I0312 23:55:43.822654 7 log.go:172] (0xc005de6d10) (0xc0014db360) Stream removed, broadcasting: 3 I0312 23:55:43.822661 7 log.go:172] (0xc005de6d10) (0xc0017581e0) Stream removed, broadcasting: 5 Mar 12 23:55:43.822: INFO: Exec stderr: "" Mar 12 23:55:43.822: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2696 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 23:55:43.822: INFO: >>> kubeConfig: /root/.kube/config I0312 23:55:43.843687 7 log.go:172] (0xc005de73f0) (0xc0014db7c0) Create stream I0312 23:55:43.843717 7 log.go:172] (0xc005de73f0) (0xc0014db7c0) Stream added, broadcasting: 1 I0312 23:55:43.845328 7 log.go:172] (0xc005de73f0) Reply frame received for 1 I0312 23:55:43.845355 7 log.go:172] (0xc005de73f0) (0xc001c332c0) Create stream I0312 23:55:43.845364 7 log.go:172] (0xc005de73f0) (0xc001c332c0) Stream added, broadcasting: 3 I0312 23:55:43.846086 7 log.go:172] (0xc005de73f0) Reply frame received for 3 I0312 23:55:43.846108 7 log.go:172] (0xc005de73f0) (0xc001758280) Create stream I0312 23:55:43.846139 7 log.go:172] (0xc005de73f0) (0xc001758280) Stream added, broadcasting: 5 I0312 23:55:43.846871 7 log.go:172] (0xc005de73f0) Reply frame received for 5 I0312 23:55:43.900779 7 log.go:172] (0xc005de73f0) Data frame received for 3 I0312 23:55:43.900800 7 log.go:172] (0xc001c332c0) (3) Data frame handling I0312 23:55:43.900819 7 log.go:172] (0xc001c332c0) (3) Data frame sent I0312 23:55:43.901002 7 log.go:172] (0xc005de73f0) Data frame received for 5 I0312 23:55:43.901032 7 log.go:172] (0xc001758280) (5) Data frame handling I0312 23:55:43.901066 7 log.go:172] (0xc005de73f0) Data frame received for 3 I0312 23:55:43.901110 7 log.go:172] (0xc001c332c0) (3) Data frame handling I0312 23:55:43.901989 7 log.go:172] (0xc005de73f0) Data frame received for 1 I0312 23:55:43.902004 7 log.go:172] (0xc0014db7c0) (1) Data frame handling I0312 23:55:43.902021 7 log.go:172] (0xc0014db7c0) (1) Data frame sent I0312 23:55:43.902037 7 log.go:172] (0xc005de73f0) (0xc0014db7c0) Stream removed, broadcasting: 1 I0312 23:55:43.902054 7 log.go:172] (0xc005de73f0) Go away received I0312 23:55:43.902184 7 log.go:172] (0xc005de73f0) (0xc0014db7c0) Stream removed, broadcasting: 1 I0312 23:55:43.902207 7 log.go:172] (0xc005de73f0) (0xc001c332c0) Stream removed, broadcasting: 3 I0312 23:55:43.902224 7 log.go:172] (0xc005de73f0) (0xc001758280) Stream removed, broadcasting: 5 Mar 12 23:55:43.902: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not 
kubelet-managed since container specifies /etc/hosts mount Mar 12 23:55:43.902: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2696 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 23:55:43.902: INFO: >>> kubeConfig: /root/.kube/config I0312 23:55:43.924515 7 log.go:172] (0xc0030b0210) (0xc001c33540) Create stream I0312 23:55:43.924538 7 log.go:172] (0xc0030b0210) (0xc001c33540) Stream added, broadcasting: 1 I0312 23:55:43.926259 7 log.go:172] (0xc0030b0210) Reply frame received for 1 I0312 23:55:43.926293 7 log.go:172] (0xc0030b0210) (0xc001c33680) Create stream I0312 23:55:43.926306 7 log.go:172] (0xc0030b0210) (0xc001c33680) Stream added, broadcasting: 3 I0312 23:55:43.927019 7 log.go:172] (0xc0030b0210) Reply frame received for 3 I0312 23:55:43.927046 7 log.go:172] (0xc0030b0210) (0xc001c33900) Create stream I0312 23:55:43.927057 7 log.go:172] (0xc0030b0210) (0xc001c33900) Stream added, broadcasting: 5 I0312 23:55:43.927674 7 log.go:172] (0xc0030b0210) Reply frame received for 5 I0312 23:55:43.976711 7 log.go:172] (0xc0030b0210) Data frame received for 3 I0312 23:55:43.976731 7 log.go:172] (0xc001c33680) (3) Data frame handling I0312 23:55:43.976748 7 log.go:172] (0xc001c33680) (3) Data frame sent I0312 23:55:43.976831 7 log.go:172] (0xc0030b0210) Data frame received for 3 I0312 23:55:43.976854 7 log.go:172] (0xc001c33680) (3) Data frame handling I0312 23:55:43.976870 7 log.go:172] (0xc0030b0210) Data frame received for 5 I0312 23:55:43.976875 7 log.go:172] (0xc001c33900) (5) Data frame handling I0312 23:55:43.978946 7 log.go:172] (0xc0030b0210) Data frame received for 1 I0312 23:55:43.978960 7 log.go:172] (0xc001c33540) (1) Data frame handling I0312 23:55:43.978968 7 log.go:172] (0xc001c33540) (1) Data frame sent I0312 23:55:43.978978 7 log.go:172] (0xc0030b0210) (0xc001c33540) Stream removed, broadcasting: 1 I0312 23:55:43.978996 7 log.go:172] (0xc0030b0210) Go away received I0312 23:55:43.979064 7 log.go:172] (0xc0030b0210) (0xc001c33540) Stream removed, broadcasting: 1 I0312 23:55:43.979079 7 log.go:172] (0xc0030b0210) (0xc001c33680) Stream removed, broadcasting: 3 I0312 23:55:43.979086 7 log.go:172] (0xc0030b0210) (0xc001c33900) Stream removed, broadcasting: 5 Mar 12 23:55:43.979: INFO: Exec stderr: "" Mar 12 23:55:43.979: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2696 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 23:55:43.979: INFO: >>> kubeConfig: /root/.kube/config I0312 23:55:43.999084 7 log.go:172] (0xc002dc5760) (0xc001511540) Create stream I0312 23:55:43.999106 7 log.go:172] (0xc002dc5760) (0xc001511540) Stream added, broadcasting: 1 I0312 23:55:44.000700 7 log.go:172] (0xc002dc5760) Reply frame received for 1 I0312 23:55:44.000729 7 log.go:172] (0xc002dc5760) (0xc0024cdcc0) Create stream I0312 23:55:44.000738 7 log.go:172] (0xc002dc5760) (0xc0024cdcc0) Stream added, broadcasting: 3 I0312 23:55:44.001289 7 log.go:172] (0xc002dc5760) Reply frame received for 3 I0312 23:55:44.001314 7 log.go:172] (0xc002dc5760) (0xc001758320) Create stream I0312 23:55:44.001326 7 log.go:172] (0xc002dc5760) (0xc001758320) Stream added, broadcasting: 5 I0312 23:55:44.001890 7 log.go:172] (0xc002dc5760) Reply frame received for 5 I0312 23:55:44.056064 7 log.go:172] (0xc002dc5760) Data frame received for 5 I0312 23:55:44.056089 7 log.go:172] (0xc001758320) (5) Data frame 
handling I0312 23:55:44.056105 7 log.go:172] (0xc002dc5760) Data frame received for 3 I0312 23:55:44.056113 7 log.go:172] (0xc0024cdcc0) (3) Data frame handling I0312 23:55:44.056122 7 log.go:172] (0xc0024cdcc0) (3) Data frame sent I0312 23:55:44.056129 7 log.go:172] (0xc002dc5760) Data frame received for 3 I0312 23:55:44.056136 7 log.go:172] (0xc0024cdcc0) (3) Data frame handling I0312 23:55:44.057407 7 log.go:172] (0xc002dc5760) Data frame received for 1 I0312 23:55:44.057420 7 log.go:172] (0xc001511540) (1) Data frame handling I0312 23:55:44.057426 7 log.go:172] (0xc001511540) (1) Data frame sent I0312 23:55:44.057506 7 log.go:172] (0xc002dc5760) (0xc001511540) Stream removed, broadcasting: 1 I0312 23:55:44.057554 7 log.go:172] (0xc002dc5760) Go away received I0312 23:55:44.057637 7 log.go:172] (0xc002dc5760) (0xc001511540) Stream removed, broadcasting: 1 I0312 23:55:44.057655 7 log.go:172] (0xc002dc5760) (0xc0024cdcc0) Stream removed, broadcasting: 3 I0312 23:55:44.057666 7 log.go:172] (0xc002dc5760) (0xc001758320) Stream removed, broadcasting: 5 Mar 12 23:55:44.057: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Mar 12 23:55:44.057: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2696 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 23:55:44.057: INFO: >>> kubeConfig: /root/.kube/config I0312 23:55:44.082774 7 log.go:172] (0xc005de7a20) (0xc0014dbae0) Create stream I0312 23:55:44.082803 7 log.go:172] (0xc005de7a20) (0xc0014dbae0) Stream added, broadcasting: 1 I0312 23:55:44.085285 7 log.go:172] (0xc005de7a20) Reply frame received for 1 I0312 23:55:44.085321 7 log.go:172] (0xc005de7a20) (0xc0017583c0) Create stream I0312 23:55:44.085335 7 log.go:172] (0xc005de7a20) (0xc0017583c0) Stream added, broadcasting: 3 I0312 23:55:44.086232 7 log.go:172] (0xc005de7a20) Reply frame received for 3 I0312 23:55:44.086261 7 log.go:172] (0xc005de7a20) (0xc0024cdd60) Create stream I0312 23:55:44.086273 7 log.go:172] (0xc005de7a20) (0xc0024cdd60) Stream added, broadcasting: 5 I0312 23:55:44.087202 7 log.go:172] (0xc005de7a20) Reply frame received for 5 I0312 23:55:44.143761 7 log.go:172] (0xc005de7a20) Data frame received for 5 I0312 23:55:44.143794 7 log.go:172] (0xc0024cdd60) (5) Data frame handling I0312 23:55:44.143823 7 log.go:172] (0xc005de7a20) Data frame received for 3 I0312 23:55:44.143837 7 log.go:172] (0xc0017583c0) (3) Data frame handling I0312 23:55:44.143848 7 log.go:172] (0xc0017583c0) (3) Data frame sent I0312 23:55:44.143861 7 log.go:172] (0xc005de7a20) Data frame received for 3 I0312 23:55:44.143873 7 log.go:172] (0xc0017583c0) (3) Data frame handling I0312 23:55:44.145275 7 log.go:172] (0xc005de7a20) Data frame received for 1 I0312 23:55:44.145317 7 log.go:172] (0xc0014dbae0) (1) Data frame handling I0312 23:55:44.145344 7 log.go:172] (0xc0014dbae0) (1) Data frame sent I0312 23:55:44.145366 7 log.go:172] (0xc005de7a20) (0xc0014dbae0) Stream removed, broadcasting: 1 I0312 23:55:44.145386 7 log.go:172] (0xc005de7a20) Go away received I0312 23:55:44.145502 7 log.go:172] (0xc005de7a20) (0xc0014dbae0) Stream removed, broadcasting: 1 I0312 23:55:44.145526 7 log.go:172] (0xc005de7a20) (0xc0017583c0) Stream removed, broadcasting: 3 I0312 23:55:44.145546 7 log.go:172] (0xc005de7a20) (0xc0024cdd60) Stream removed, broadcasting: 5 Mar 12 23:55:44.145: INFO: Exec stderr: "" Mar 12 23:55:44.145: INFO: 
ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2696 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 23:55:44.145: INFO: >>> kubeConfig: /root/.kube/config I0312 23:55:44.167500 7 log.go:172] (0xc003206370) (0xc0012000a0) Create stream I0312 23:55:44.167523 7 log.go:172] (0xc003206370) (0xc0012000a0) Stream added, broadcasting: 1 I0312 23:55:44.169721 7 log.go:172] (0xc003206370) Reply frame received for 1 I0312 23:55:44.169758 7 log.go:172] (0xc003206370) (0xc001200320) Create stream I0312 23:55:44.169771 7 log.go:172] (0xc003206370) (0xc001200320) Stream added, broadcasting: 3 I0312 23:55:44.170712 7 log.go:172] (0xc003206370) Reply frame received for 3 I0312 23:55:44.170745 7 log.go:172] (0xc003206370) (0xc0015115e0) Create stream I0312 23:55:44.170760 7 log.go:172] (0xc003206370) (0xc0015115e0) Stream added, broadcasting: 5 I0312 23:55:44.171591 7 log.go:172] (0xc003206370) Reply frame received for 5 I0312 23:55:44.244419 7 log.go:172] (0xc003206370) Data frame received for 5 I0312 23:55:44.244447 7 log.go:172] (0xc0015115e0) (5) Data frame handling I0312 23:55:44.244465 7 log.go:172] (0xc003206370) Data frame received for 3 I0312 23:55:44.244477 7 log.go:172] (0xc001200320) (3) Data frame handling I0312 23:55:44.244488 7 log.go:172] (0xc001200320) (3) Data frame sent I0312 23:55:44.244495 7 log.go:172] (0xc003206370) Data frame received for 3 I0312 23:55:44.244510 7 log.go:172] (0xc001200320) (3) Data frame handling I0312 23:55:44.245272 7 log.go:172] (0xc003206370) Data frame received for 1 I0312 23:55:44.245291 7 log.go:172] (0xc0012000a0) (1) Data frame handling I0312 23:55:44.245332 7 log.go:172] (0xc0012000a0) (1) Data frame sent I0312 23:55:44.245347 7 log.go:172] (0xc003206370) (0xc0012000a0) Stream removed, broadcasting: 1 I0312 23:55:44.245360 7 log.go:172] (0xc003206370) Go away received I0312 23:55:44.245473 7 log.go:172] (0xc003206370) (0xc0012000a0) Stream removed, broadcasting: 1 I0312 23:55:44.245490 7 log.go:172] (0xc003206370) (0xc001200320) Stream removed, broadcasting: 3 I0312 23:55:44.245497 7 log.go:172] (0xc003206370) (0xc0015115e0) Stream removed, broadcasting: 5 Mar 12 23:55:44.245: INFO: Exec stderr: "" Mar 12 23:55:44.245: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2696 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 23:55:44.245: INFO: >>> kubeConfig: /root/.kube/config I0312 23:55:44.264541 7 log.go:172] (0xc0030b0840) (0xc001c33d60) Create stream I0312 23:55:44.264563 7 log.go:172] (0xc0030b0840) (0xc001c33d60) Stream added, broadcasting: 1 I0312 23:55:44.266150 7 log.go:172] (0xc0030b0840) Reply frame received for 1 I0312 23:55:44.266170 7 log.go:172] (0xc0030b0840) (0xc001758460) Create stream I0312 23:55:44.266188 7 log.go:172] (0xc0030b0840) (0xc001758460) Stream added, broadcasting: 3 I0312 23:55:44.267087 7 log.go:172] (0xc0030b0840) Reply frame received for 3 I0312 23:55:44.267124 7 log.go:172] (0xc0030b0840) (0xc001c33e00) Create stream I0312 23:55:44.267135 7 log.go:172] (0xc0030b0840) (0xc001c33e00) Stream added, broadcasting: 5 I0312 23:55:44.267830 7 log.go:172] (0xc0030b0840) Reply frame received for 5 I0312 23:55:44.352473 7 log.go:172] (0xc0030b0840) Data frame received for 3 I0312 23:55:44.352505 7 log.go:172] (0xc001758460) (3) Data frame handling I0312 23:55:44.352521 7 log.go:172] (0xc001758460) (3) 
Data frame sent I0312 23:55:44.352527 7 log.go:172] (0xc0030b0840) Data frame received for 3 I0312 23:55:44.352533 7 log.go:172] (0xc001758460) (3) Data frame handling I0312 23:55:44.352549 7 log.go:172] (0xc0030b0840) Data frame received for 5 I0312 23:55:44.352555 7 log.go:172] (0xc001c33e00) (5) Data frame handling I0312 23:55:44.353692 7 log.go:172] (0xc0030b0840) Data frame received for 1 I0312 23:55:44.353715 7 log.go:172] (0xc001c33d60) (1) Data frame handling I0312 23:55:44.353736 7 log.go:172] (0xc001c33d60) (1) Data frame sent I0312 23:55:44.353754 7 log.go:172] (0xc0030b0840) (0xc001c33d60) Stream removed, broadcasting: 1 I0312 23:55:44.353857 7 log.go:172] (0xc0030b0840) Go away received I0312 23:55:44.353943 7 log.go:172] (0xc0030b0840) (0xc001c33d60) Stream removed, broadcasting: 1 I0312 23:55:44.353982 7 log.go:172] (0xc0030b0840) (0xc001758460) Stream removed, broadcasting: 3 I0312 23:55:44.354007 7 log.go:172] (0xc0030b0840) (0xc001c33e00) Stream removed, broadcasting: 5 Mar 12 23:55:44.354: INFO: Exec stderr: "" Mar 12 23:55:44.354: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2696 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 23:55:44.354: INFO: >>> kubeConfig: /root/.kube/config I0312 23:55:44.376806 7 log.go:172] (0xc002800000) (0xc0024cc000) Create stream I0312 23:55:44.376826 7 log.go:172] (0xc002800000) (0xc0024cc000) Stream added, broadcasting: 1 I0312 23:55:44.378655 7 log.go:172] (0xc002800000) Reply frame received for 1 I0312 23:55:44.378681 7 log.go:172] (0xc002800000) (0xc0013603c0) Create stream I0312 23:55:44.378691 7 log.go:172] (0xc002800000) (0xc0013603c0) Stream added, broadcasting: 3 I0312 23:55:44.379377 7 log.go:172] (0xc002800000) Reply frame received for 3 I0312 23:55:44.379404 7 log.go:172] (0xc002800000) (0xc0024cc0a0) Create stream I0312 23:55:44.379415 7 log.go:172] (0xc002800000) (0xc0024cc0a0) Stream added, broadcasting: 5 I0312 23:55:44.380233 7 log.go:172] (0xc002800000) Reply frame received for 5 I0312 23:55:44.457909 7 log.go:172] (0xc002800000) Data frame received for 3 I0312 23:55:44.457971 7 log.go:172] (0xc0013603c0) (3) Data frame handling I0312 23:55:44.458013 7 log.go:172] (0xc002800000) Data frame received for 5 I0312 23:55:44.458060 7 log.go:172] (0xc0024cc0a0) (5) Data frame handling I0312 23:55:44.458091 7 log.go:172] (0xc0013603c0) (3) Data frame sent I0312 23:55:44.458105 7 log.go:172] (0xc002800000) Data frame received for 3 I0312 23:55:44.458145 7 log.go:172] (0xc0013603c0) (3) Data frame handling I0312 23:55:44.459152 7 log.go:172] (0xc002800000) Data frame received for 1 I0312 23:55:44.459169 7 log.go:172] (0xc0024cc000) (1) Data frame handling I0312 23:55:44.459178 7 log.go:172] (0xc0024cc000) (1) Data frame sent I0312 23:55:44.459195 7 log.go:172] (0xc002800000) (0xc0024cc000) Stream removed, broadcasting: 1 I0312 23:55:44.459216 7 log.go:172] (0xc002800000) Go away received I0312 23:55:44.459373 7 log.go:172] (0xc002800000) (0xc0024cc000) Stream removed, broadcasting: 1 I0312 23:55:44.459397 7 log.go:172] (0xc002800000) (0xc0013603c0) Stream removed, broadcasting: 3 I0312 23:55:44.459418 7 log.go:172] (0xc002800000) (0xc0024cc0a0) Stream removed, broadcasting: 5 Mar 12 23:55:44.459: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:55:44.459: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-2696" for this suite. • [SLOW TEST:9.187 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":99,"skipped":1453,"failed":0} [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:55:44.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:56:07.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2436" for this suite. 
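------------------------------
The Container Runtime blackbox test above drives one container per restart policy (terminate-cmd-rpa, -rpof and -rpn correspond to Always, OnFailure and Never) and checks the resulting RestartCount, Phase, Ready condition and State. A sketch of how the three pods differ, with assumed image and command:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// terminatePod builds a pod whose single container exits on its own, so
// the expected Phase and RestartCount depend only on the restart policy.
func terminatePod(name string, policy corev1.RestartPolicy) corev1.Pod {
	return corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			RestartPolicy: policy,
			Containers: []corev1.Container{{
				Name:    name,
				Image:   "busybox",                      // assumed image
				Command: []string{"sh", "-c", "exit 0"}, // assumed command
			}},
		},
	}
}

func main() {
	pods := []corev1.Pod{
		terminatePod("terminate-cmd-rpa", corev1.RestartPolicyAlways),     // keeps restarting: RestartCount grows
		terminatePod("terminate-cmd-rpof", corev1.RestartPolicyOnFailure), // restarts only on non-zero exit
		terminatePod("terminate-cmd-rpn", corev1.RestartPolicyNever),      // runs once: Phase Succeeded or Failed
	}
	for _, p := range pods {
		fmt.Println(p.Name, p.Spec.RestartPolicy)
	}
}
------------------------------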
• [SLOW TEST:23.315 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":275,"completed":100,"skipped":1453,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:56:07.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-63e3778e-56ba-44f7-9ac0-213689575b84 STEP: Creating a pod to test consume secrets Mar 12 23:56:07.848: INFO: Waiting up to 5m0s for pod "pod-secrets-382f4b29-1b83-4899-885c-618169634f98" in namespace "secrets-7306" to be "Succeeded or Failed" Mar 12 23:56:07.852: INFO: Pod "pod-secrets-382f4b29-1b83-4899-885c-618169634f98": Phase="Pending", Reason="", readiness=false. Elapsed: 3.900748ms Mar 12 23:56:09.856: INFO: Pod "pod-secrets-382f4b29-1b83-4899-885c-618169634f98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007475075s STEP: Saw pod success Mar 12 23:56:09.856: INFO: Pod "pod-secrets-382f4b29-1b83-4899-885c-618169634f98" satisfied condition "Succeeded or Failed" Mar 12 23:56:09.858: INFO: Trying to get logs from node latest-worker pod pod-secrets-382f4b29-1b83-4899-885c-618169634f98 container secret-volume-test: STEP: delete the pod Mar 12 23:56:09.885: INFO: Waiting for pod pod-secrets-382f4b29-1b83-4899-885c-618169634f98 to disappear Mar 12 23:56:09.894: INFO: Pod pod-secrets-382f4b29-1b83-4899-885c-618169634f98 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:56:09.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7306" for this suite. 
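------------------------------
The Secrets volume test above consumes a secret "with mappings", i.e. with per-key target paths rather than the default one-file-per-key layout. A sketch of the volume source, with hypothetical key, path and mode (the real secret name carries a generated suffix):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // hypothetical per-item file mode

	vol := corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: "secret-test-map", // name pattern from the log, suffix elided
				// Items projects only the listed keys, each under its own path.
				Items: []corev1.KeyToPath{{
					Key:  "data-1",
					Path: "new-path-data-1",
					Mode: &mode,
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
------------------------------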
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":101,"skipped":1463,"failed":0} SSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:56:09.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test substitution in container's command Mar 12 23:56:10.033: INFO: Waiting up to 5m0s for pod "var-expansion-3d0e1078-fac1-4e4c-bf64-e3bc1dd3e659" in namespace "var-expansion-9569" to be "Succeeded or Failed" Mar 12 23:56:10.050: INFO: Pod "var-expansion-3d0e1078-fac1-4e4c-bf64-e3bc1dd3e659": Phase="Pending", Reason="", readiness=false. Elapsed: 16.247088ms Mar 12 23:56:12.054: INFO: Pod "var-expansion-3d0e1078-fac1-4e4c-bf64-e3bc1dd3e659": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020153398s Mar 12 23:56:14.057: INFO: Pod "var-expansion-3d0e1078-fac1-4e4c-bf64-e3bc1dd3e659": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023704031s STEP: Saw pod success Mar 12 23:56:14.057: INFO: Pod "var-expansion-3d0e1078-fac1-4e4c-bf64-e3bc1dd3e659" satisfied condition "Succeeded or Failed" Mar 12 23:56:14.059: INFO: Trying to get logs from node latest-worker pod var-expansion-3d0e1078-fac1-4e4c-bf64-e3bc1dd3e659 container dapi-container: STEP: delete the pod Mar 12 23:56:14.340: INFO: Waiting for pod var-expansion-3d0e1078-fac1-4e4c-bf64-e3bc1dd3e659 to disappear Mar 12 23:56:14.344: INFO: Pod var-expansion-3d0e1078-fac1-4e4c-bf64-e3bc1dd3e659 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:56:14.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9569" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":275,"completed":102,"skipped":1469,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:56:14.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD Mar 12 23:56:14.396: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:56:29.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5705" for this suite. • [SLOW TEST:14.945 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":275,"completed":103,"skipped":1471,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:56:29.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-cecca294-9c1d-455c-bb58-26014ae30cc2 STEP: Creating a pod to test consume secrets Mar 12 23:56:29.352: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9f638222-2ba7-4113-ae61-8961db186425" in namespace "projected-5039" to be "Succeeded or Failed" Mar 12 23:56:29.355: INFO: Pod "pod-projected-secrets-9f638222-2ba7-4113-ae61-8961db186425": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.846898ms Mar 12 23:56:31.359: INFO: Pod "pod-projected-secrets-9f638222-2ba7-4113-ae61-8961db186425": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006986585s STEP: Saw pod success Mar 12 23:56:31.359: INFO: Pod "pod-projected-secrets-9f638222-2ba7-4113-ae61-8961db186425" satisfied condition "Succeeded or Failed" Mar 12 23:56:31.378: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-9f638222-2ba7-4113-ae61-8961db186425 container projected-secret-volume-test: STEP: delete the pod Mar 12 23:56:31.408: INFO: Waiting for pod pod-projected-secrets-9f638222-2ba7-4113-ae61-8961db186425 to disappear Mar 12 23:56:31.415: INFO: Pod pod-projected-secrets-9f638222-2ba7-4113-ae61-8961db186425 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:56:31.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5039" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":104,"skipped":1486,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:56:31.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 12 23:56:31.475: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Mar 12 23:56:36.481: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 12 23:56:36.481: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 12 23:56:36.521: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-5171 /apis/apps/v1/namespaces/deployment-5171/deployments/test-cleanup-deployment 53113cd3-1cd7-4c49-9c03-dabeb39289da 1215477 1 2020-03-12 23:56:36 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002988088 ClusterFirst map[] false 
false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Mar 12 23:56:36.579: INFO: New ReplicaSet "test-cleanup-deployment-577c77b589" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-577c77b589 deployment-5171 /apis/apps/v1/namespaces/deployment-5171/replicasets/test-cleanup-deployment-577c77b589 a5d78f21-e16a-4aee-bdbc-03ac5f98ab7f 1215479 1 2020-03-12 23:56:36 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 53113cd3-1cd7-4c49-9c03-dabeb39289da 0xc002a0d0f7 0xc002a0d0f8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 577c77b589,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002a0d188 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 12 23:56:36.579: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Mar 12 23:56:36.580: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-5171 /apis/apps/v1/namespaces/deployment-5171/replicasets/test-cleanup-controller 0774c3a8-0176-48eb-a44d-014c3b125578 1215478 1 2020-03-12 23:56:31 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 53113cd3-1cd7-4c49-9c03-dabeb39289da 0xc002a0cf9f 0xc002a0cfe0}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002a0d068 ClusterFirst map[] false false false
PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 12 23:56:36.645: INFO: Pod "test-cleanup-controller-jkgxr" is available: &Pod{ObjectMeta:{test-cleanup-controller-jkgxr test-cleanup-controller- deployment-5171 /api/v1/namespaces/deployment-5171/pods/test-cleanup-controller-jkgxr b982ba0f-b9ae-4342-8ada-ca8d3a84b74e 1215451 0 2020-03-12 23:56:31 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 0774c3a8-0176-48eb-a44d-014c3b125578 0xc002946137 0xc002946138}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gnvxm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gnvxm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gnvxm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 
23:56:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 23:56:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 23:56:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 23:56:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:10.244.1.175,StartTime:2020-03-12 23:56:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-12 23:56:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b2e937c2ab9f3cb79db8abce90b2772c994d60feffa0d1b3a89b104effb7c19c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.175,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 12 23:56:36.645: INFO: Pod "test-cleanup-deployment-577c77b589-bbx67" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-577c77b589-bbx67 test-cleanup-deployment-577c77b589- deployment-5171 /api/v1/namespaces/deployment-5171/pods/test-cleanup-deployment-577c77b589-bbx67 3ec17752-81a2-4216-be18-a73fd9024ff0 1215485 0 2020-03-12 23:56:36 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-577c77b589 a5d78f21-e16a-4aee-bdbc-03ac5f98ab7f 0xc002946357 0xc002946358}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gnvxm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gnvxm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gnvxm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 23:56:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:56:36.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5171" for this suite. 
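------------------------------
The Deployment dump above includes RevisionHistoryLimit:*0, which is the setting this test exercises: with a history limit of 0, the deployment controller garbage-collects each old ReplicaSet as soon as it is fully scaled down. A sketch of that Deployment shape using the Go client types (field values mirror the dump; the package and helper names are illustrative assumptions):

package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// cleanupDeployment mirrors the Deployment dumped above: one replica,
// and RevisionHistoryLimit set to 0 so old ReplicaSets are deleted
// once they are scaled down, which is what the test waits for.
func cleanupDeployment() *appsv1.Deployment {
	replicas := int32(1)
	historyLimit := int32(0)
	labels := map[string]string{"name": "cleanup-pod"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas:             &replicas,
			RevisionHistoryLimit: &historyLimit,
			Selector:             &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "agnhost",
						Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12",
					}},
				},
			},
		},
	}
}
------------------------------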
• [SLOW TEST:5.263 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":275,"completed":105,"skipped":1517,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:56:36.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-2672 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 12 23:56:36.729: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 12 23:56:36.763: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 12 23:56:38.768: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 12 23:56:40.768: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 12 23:56:42.770: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 12 23:56:44.768: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 12 23:56:46.766: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 12 23:56:48.767: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 12 23:56:48.772: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 12 23:56:50.777: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 12 23:56:52.775: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 12 23:56:54.776: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 12 23:56:56.776: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 12 23:56:58.777: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 12 23:57:00.815: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.177 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2672 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 23:57:00.815: INFO: >>> kubeConfig: /root/.kube/config I0312 23:57:00.852530 7 log.go:172] (0xc0022a9080) (0xc0024cd5e0) Create stream I0312 23:57:00.852576 7 log.go:172] (0xc0022a9080) (0xc0024cd5e0) Stream added, broadcasting: 1 I0312 23:57:00.854603 7 log.go:172] (0xc0022a9080) Reply frame received for 1 I0312 23:57:00.854639 7 log.go:172] (0xc0022a9080) (0xc001c02fa0) Create stream I0312 23:57:00.854652 7 log.go:172] (0xc0022a9080) (0xc001c02fa0) Stream added, broadcasting: 3 I0312 23:57:00.855494 7 log.go:172] 
(0xc0022a9080) Reply frame received for 3 I0312 23:57:00.855551 7 log.go:172] (0xc0022a9080) (0xc001b56000) Create stream I0312 23:57:00.855569 7 log.go:172] (0xc0022a9080) (0xc001b56000) Stream added, broadcasting: 5 I0312 23:57:00.856462 7 log.go:172] (0xc0022a9080) Reply frame received for 5 I0312 23:57:01.927225 7 log.go:172] (0xc0022a9080) Data frame received for 5 I0312 23:57:01.927259 7 log.go:172] (0xc001b56000) (5) Data frame handling I0312 23:57:01.927280 7 log.go:172] (0xc0022a9080) Data frame received for 3 I0312 23:57:01.927292 7 log.go:172] (0xc001c02fa0) (3) Data frame handling I0312 23:57:01.927306 7 log.go:172] (0xc001c02fa0) (3) Data frame sent I0312 23:57:01.927319 7 log.go:172] (0xc0022a9080) Data frame received for 3 I0312 23:57:01.927329 7 log.go:172] (0xc001c02fa0) (3) Data frame handling I0312 23:57:01.929012 7 log.go:172] (0xc0022a9080) Data frame received for 1 I0312 23:57:01.929041 7 log.go:172] (0xc0024cd5e0) (1) Data frame handling I0312 23:57:01.929064 7 log.go:172] (0xc0024cd5e0) (1) Data frame sent I0312 23:57:01.929107 7 log.go:172] (0xc0022a9080) (0xc0024cd5e0) Stream removed, broadcasting: 1 I0312 23:57:01.929134 7 log.go:172] (0xc0022a9080) Go away received I0312 23:57:01.929301 7 log.go:172] (0xc0022a9080) (0xc0024cd5e0) Stream removed, broadcasting: 1 I0312 23:57:01.929323 7 log.go:172] (0xc0022a9080) (0xc001c02fa0) Stream removed, broadcasting: 3 I0312 23:57:01.929342 7 log.go:172] (0xc0022a9080) (0xc001b56000) Stream removed, broadcasting: 5 Mar 12 23:57:01.929: INFO: Found all expected endpoints: [netserver-0] Mar 12 23:57:01.932: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.246 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2672 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 12 23:57:01.933: INFO: >>> kubeConfig: /root/.kube/config I0312 23:57:01.968216 7 log.go:172] (0xc003820370) (0xc001b56460) Create stream I0312 23:57:01.968268 7 log.go:172] (0xc003820370) (0xc001b56460) Stream added, broadcasting: 1 I0312 23:57:01.974298 7 log.go:172] (0xc003820370) Reply frame received for 1 I0312 23:57:01.974353 7 log.go:172] (0xc003820370) (0xc001fc0460) Create stream I0312 23:57:01.974375 7 log.go:172] (0xc003820370) (0xc001fc0460) Stream added, broadcasting: 3 I0312 23:57:01.977584 7 log.go:172] (0xc003820370) Reply frame received for 3 I0312 23:57:01.977634 7 log.go:172] (0xc003820370) (0xc001fc05a0) Create stream I0312 23:57:01.977648 7 log.go:172] (0xc003820370) (0xc001fc05a0) Stream added, broadcasting: 5 I0312 23:57:01.978719 7 log.go:172] (0xc003820370) Reply frame received for 5 I0312 23:57:03.047113 7 log.go:172] (0xc003820370) Data frame received for 5 I0312 23:57:03.047145 7 log.go:172] (0xc001fc05a0) (5) Data frame handling I0312 23:57:03.047167 7 log.go:172] (0xc003820370) Data frame received for 3 I0312 23:57:03.047198 7 log.go:172] (0xc001fc0460) (3) Data frame handling I0312 23:57:03.047215 7 log.go:172] (0xc001fc0460) (3) Data frame sent I0312 23:57:03.047234 7 log.go:172] (0xc003820370) Data frame received for 3 I0312 23:57:03.047243 7 log.go:172] (0xc001fc0460) (3) Data frame handling I0312 23:57:03.048680 7 log.go:172] (0xc003820370) Data frame received for 1 I0312 23:57:03.048698 7 log.go:172] (0xc001b56460) (1) Data frame handling I0312 23:57:03.048713 7 log.go:172] (0xc001b56460) (1) Data frame sent I0312 23:57:03.048726 7 log.go:172] (0xc003820370) (0xc001b56460) Stream removed, broadcasting: 1 I0312 
23:57:03.048810 7 log.go:172] (0xc003820370) (0xc001b56460) Stream removed, broadcasting: 1 I0312 23:57:03.048824 7 log.go:172] (0xc003820370) (0xc001fc0460) Stream removed, broadcasting: 3 I0312 23:57:03.048845 7 log.go:172] (0xc003820370) (0xc001fc05a0) Stream removed, broadcasting: 5 Mar 12 23:57:03.048: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:57:03.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0312 23:57:03.048961 7 log.go:172] (0xc003820370) Go away received STEP: Destroying namespace "pod-network-test-2672" for this suite. • [SLOW TEST:26.368 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":106,"skipped":1548,"failed":0} S ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:57:03.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Mar 12 23:57:03.136: INFO: Waiting up to 5m0s for pod "downward-api-033098e6-16eb-45f7-8cb7-2381401889b5" in namespace "downward-api-499" to be "Succeeded or Failed" Mar 12 23:57:03.141: INFO: Pod "downward-api-033098e6-16eb-45f7-8cb7-2381401889b5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.768836ms Mar 12 23:57:05.145: INFO: Pod "downward-api-033098e6-16eb-45f7-8cb7-2381401889b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008331269s STEP: Saw pod success Mar 12 23:57:05.145: INFO: Pod "downward-api-033098e6-16eb-45f7-8cb7-2381401889b5" satisfied condition "Succeeded or Failed" Mar 12 23:57:05.148: INFO: Trying to get logs from node latest-worker2 pod downward-api-033098e6-16eb-45f7-8cb7-2381401889b5 container dapi-container: STEP: delete the pod Mar 12 23:57:05.195: INFO: Waiting for pod downward-api-033098e6-16eb-45f7-8cb7-2381401889b5 to disappear Mar 12 23:57:05.214: INFO: Pod downward-api-033098e6-16eb-45f7-8cb7-2381401889b5 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:57:05.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-499" for this suite. 
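------------------------------
The Downward API test above relies on resourceFieldRef env vars: when the container declares no CPU or memory limits, limits.cpu and limits.memory resolve to the node's allocatable capacity, which is what the test asserts. A minimal sketch of such a pod using the Go client types (package, names, image, and env var names are illustrative assumptions, not taken from this run):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIPod wires env vars to limits.cpu and limits.memory via the
// downward API. Since the container sets no resource limits, the kubelet
// falls back to node allocatable when resolving these values.
func downwardAPIPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					{Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
					}},
					{Name: "MEMORY_LIMIT", ValueFrom: &corev1.EnvVarSource{
						ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
					}},
				},
			}},
		},
	}
}
------------------------------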
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":275,"completed":107,"skipped":1549,"failed":0} SSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:57:05.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-6469 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-6469 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6469 Mar 12 23:57:05.273: INFO: Found 0 stateful pods, waiting for 1 Mar 12 23:57:15.277: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Mar 12 23:57:15.281: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6469 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 12 23:57:15.515: INFO: stderr: "I0312 23:57:15.418234 1922 log.go:172] (0xc000929340) (0xc0009da780) Create stream\nI0312 23:57:15.418277 1922 log.go:172] (0xc000929340) (0xc0009da780) Stream added, broadcasting: 1\nI0312 23:57:15.422093 1922 log.go:172] (0xc000929340) Reply frame received for 1\nI0312 23:57:15.422142 1922 log.go:172] (0xc000929340) (0xc000603540) Create stream\nI0312 23:57:15.422151 1922 log.go:172] (0xc000929340) (0xc000603540) Stream added, broadcasting: 3\nI0312 23:57:15.422966 1922 log.go:172] (0xc000929340) Reply frame received for 3\nI0312 23:57:15.422997 1922 log.go:172] (0xc000929340) (0xc000406960) Create stream\nI0312 23:57:15.423004 1922 log.go:172] (0xc000929340) (0xc000406960) Stream added, broadcasting: 5\nI0312 23:57:15.423786 1922 log.go:172] (0xc000929340) Reply frame received for 5\nI0312 23:57:15.493519 1922 log.go:172] (0xc000929340) Data frame received for 5\nI0312 23:57:15.493542 1922 log.go:172] (0xc000406960) (5) Data frame handling\nI0312 23:57:15.493557 1922 log.go:172] (0xc000406960) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0312 23:57:15.509162 1922 log.go:172] (0xc000929340) Data frame received for 3\nI0312 23:57:15.509198 1922 log.go:172] (0xc000603540) (3) Data frame handling\nI0312 23:57:15.509235 1922 log.go:172] (0xc000603540) (3) Data frame sent\nI0312 23:57:15.509265 1922 log.go:172] 
(0xc000929340) Data frame received for 3\nI0312 23:57:15.509282 1922 log.go:172] (0xc000929340) Data frame received for 5\nI0312 23:57:15.509336 1922 log.go:172] (0xc000406960) (5) Data frame handling\nI0312 23:57:15.509361 1922 log.go:172] (0xc000603540) (3) Data frame handling\nI0312 23:57:15.511175 1922 log.go:172] (0xc000929340) Data frame received for 1\nI0312 23:57:15.511195 1922 log.go:172] (0xc0009da780) (1) Data frame handling\nI0312 23:57:15.511205 1922 log.go:172] (0xc0009da780) (1) Data frame sent\nI0312 23:57:15.511221 1922 log.go:172] (0xc000929340) (0xc0009da780) Stream removed, broadcasting: 1\nI0312 23:57:15.511263 1922 log.go:172] (0xc000929340) Go away received\nI0312 23:57:15.511525 1922 log.go:172] (0xc000929340) (0xc0009da780) Stream removed, broadcasting: 1\nI0312 23:57:15.511538 1922 log.go:172] (0xc000929340) (0xc000603540) Stream removed, broadcasting: 3\nI0312 23:57:15.511548 1922 log.go:172] (0xc000929340) (0xc000406960) Stream removed, broadcasting: 5\n" Mar 12 23:57:15.515: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 12 23:57:15.515: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 12 23:57:15.518: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 12 23:57:25.524: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 12 23:57:25.524: INFO: Waiting for statefulset status.replicas updated to 0 Mar 12 23:57:25.557: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999501s Mar 12 23:57:26.566: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.976098104s Mar 12 23:57:27.570: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.96774987s Mar 12 23:57:28.574: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.963323673s Mar 12 23:57:29.578: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.959173125s Mar 12 23:57:30.582: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.955069656s Mar 12 23:57:31.586: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.951477504s Mar 12 23:57:32.589: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.947237469s Mar 12 23:57:33.593: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.944078323s Mar 12 23:57:34.603: INFO: Verifying statefulset ss doesn't scale past 1 for another 940.086736ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6469 Mar 12 23:57:35.607: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6469 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 23:57:35.848: INFO: stderr: "I0312 23:57:35.774573 1942 log.go:172] (0xc0009fa000) (0xc0008114a0) Create stream\nI0312 23:57:35.774632 1942 log.go:172] (0xc0009fa000) (0xc0008114a0) Stream added, broadcasting: 1\nI0312 23:57:35.776450 1942 log.go:172] (0xc0009fa000) Reply frame received for 1\nI0312 23:57:35.776487 1942 log.go:172] (0xc0009fa000) (0xc00041c000) Create stream\nI0312 23:57:35.776503 1942 log.go:172] (0xc0009fa000) (0xc00041c000) Stream added, broadcasting: 3\nI0312 23:57:35.777496 1942 log.go:172] (0xc0009fa000) Reply frame received for 3\nI0312 23:57:35.777539 1942 log.go:172] (0xc0009fa000) (0xc000811680) Create 
stream\nI0312 23:57:35.777551 1942 log.go:172] (0xc0009fa000) (0xc000811680) Stream added, broadcasting: 5\nI0312 23:57:35.778422 1942 log.go:172] (0xc0009fa000) Reply frame received for 5\nI0312 23:57:35.841656 1942 log.go:172] (0xc0009fa000) Data frame received for 3\nI0312 23:57:35.841680 1942 log.go:172] (0xc00041c000) (3) Data frame handling\nI0312 23:57:35.841690 1942 log.go:172] (0xc00041c000) (3) Data frame sent\nI0312 23:57:35.841918 1942 log.go:172] (0xc0009fa000) Data frame received for 5\nI0312 23:57:35.841953 1942 log.go:172] (0xc000811680) (5) Data frame handling\nI0312 23:57:35.841969 1942 log.go:172] (0xc000811680) (5) Data frame sent\nI0312 23:57:35.841986 1942 log.go:172] (0xc0009fa000) Data frame received for 5\nI0312 23:57:35.842002 1942 log.go:172] (0xc000811680) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0312 23:57:35.842041 1942 log.go:172] (0xc0009fa000) Data frame received for 3\nI0312 23:57:35.842064 1942 log.go:172] (0xc00041c000) (3) Data frame handling\nI0312 23:57:35.843698 1942 log.go:172] (0xc0009fa000) Data frame received for 1\nI0312 23:57:35.843717 1942 log.go:172] (0xc0008114a0) (1) Data frame handling\nI0312 23:57:35.843736 1942 log.go:172] (0xc0008114a0) (1) Data frame sent\nI0312 23:57:35.843765 1942 log.go:172] (0xc0009fa000) (0xc0008114a0) Stream removed, broadcasting: 1\nI0312 23:57:35.843782 1942 log.go:172] (0xc0009fa000) Go away received\nI0312 23:57:35.844156 1942 log.go:172] (0xc0009fa000) (0xc0008114a0) Stream removed, broadcasting: 1\nI0312 23:57:35.844176 1942 log.go:172] (0xc0009fa000) (0xc00041c000) Stream removed, broadcasting: 3\nI0312 23:57:35.844183 1942 log.go:172] (0xc0009fa000) (0xc000811680) Stream removed, broadcasting: 5\n" Mar 12 23:57:35.848: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 12 23:57:35.848: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 12 23:57:35.852: INFO: Found 1 stateful pods, waiting for 3 Mar 12 23:57:45.857: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 12 23:57:45.857: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 12 23:57:45.857: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Mar 12 23:57:45.863: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6469 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 12 23:57:46.065: INFO: stderr: "I0312 23:57:45.994526 1964 log.go:172] (0xc000aae000) (0xc000a86000) Create stream\nI0312 23:57:45.994578 1964 log.go:172] (0xc000aae000) (0xc000a86000) Stream added, broadcasting: 1\nI0312 23:57:45.996766 1964 log.go:172] (0xc000aae000) Reply frame received for 1\nI0312 23:57:45.996805 1964 log.go:172] (0xc000aae000) (0xc000a765a0) Create stream\nI0312 23:57:45.996813 1964 log.go:172] (0xc000aae000) (0xc000a765a0) Stream added, broadcasting: 3\nI0312 23:57:45.997617 1964 log.go:172] (0xc000aae000) Reply frame received for 3\nI0312 23:57:45.997643 1964 log.go:172] (0xc000aae000) (0xc000ae0000) Create stream\nI0312 23:57:45.997657 1964 log.go:172] (0xc000aae000) (0xc000ae0000) Stream added, broadcasting: 5\nI0312 23:57:45.998467 1964 log.go:172] 
(0xc000aae000) Reply frame received for 5\nI0312 23:57:46.060618 1964 log.go:172] (0xc000aae000) Data frame received for 5\nI0312 23:57:46.060647 1964 log.go:172] (0xc000aae000) Data frame received for 3\nI0312 23:57:46.060672 1964 log.go:172] (0xc000a765a0) (3) Data frame handling\nI0312 23:57:46.060684 1964 log.go:172] (0xc000a765a0) (3) Data frame sent\nI0312 23:57:46.060692 1964 log.go:172] (0xc000aae000) Data frame received for 3\nI0312 23:57:46.060701 1964 log.go:172] (0xc000a765a0) (3) Data frame handling\nI0312 23:57:46.060729 1964 log.go:172] (0xc000ae0000) (5) Data frame handling\nI0312 23:57:46.060745 1964 log.go:172] (0xc000ae0000) (5) Data frame sent\nI0312 23:57:46.060753 1964 log.go:172] (0xc000aae000) Data frame received for 5\nI0312 23:57:46.060785 1964 log.go:172] (0xc000ae0000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0312 23:57:46.061893 1964 log.go:172] (0xc000aae000) Data frame received for 1\nI0312 23:57:46.061918 1964 log.go:172] (0xc000a86000) (1) Data frame handling\nI0312 23:57:46.061926 1964 log.go:172] (0xc000a86000) (1) Data frame sent\nI0312 23:57:46.061935 1964 log.go:172] (0xc000aae000) (0xc000a86000) Stream removed, broadcasting: 1\nI0312 23:57:46.061974 1964 log.go:172] (0xc000aae000) Go away received\nI0312 23:57:46.062256 1964 log.go:172] (0xc000aae000) (0xc000a86000) Stream removed, broadcasting: 1\nI0312 23:57:46.062275 1964 log.go:172] (0xc000aae000) (0xc000a765a0) Stream removed, broadcasting: 3\nI0312 23:57:46.062282 1964 log.go:172] (0xc000aae000) (0xc000ae0000) Stream removed, broadcasting: 5\n" Mar 12 23:57:46.065: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 12 23:57:46.065: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 12 23:57:46.065: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6469 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 12 23:57:46.264: INFO: stderr: "I0312 23:57:46.174872 1984 log.go:172] (0xc000a11760) (0xc000b46820) Create stream\nI0312 23:57:46.174928 1984 log.go:172] (0xc000a11760) (0xc000b46820) Stream added, broadcasting: 1\nI0312 23:57:46.179947 1984 log.go:172] (0xc000a11760) Reply frame received for 1\nI0312 23:57:46.179986 1984 log.go:172] (0xc000a11760) (0xc0007e3680) Create stream\nI0312 23:57:46.179999 1984 log.go:172] (0xc000a11760) (0xc0007e3680) Stream added, broadcasting: 3\nI0312 23:57:46.180703 1984 log.go:172] (0xc000a11760) Reply frame received for 3\nI0312 23:57:46.180729 1984 log.go:172] (0xc000a11760) (0xc0005f8aa0) Create stream\nI0312 23:57:46.180741 1984 log.go:172] (0xc000a11760) (0xc0005f8aa0) Stream added, broadcasting: 5\nI0312 23:57:46.181414 1984 log.go:172] (0xc000a11760) Reply frame received for 5\nI0312 23:57:46.247183 1984 log.go:172] (0xc000a11760) Data frame received for 5\nI0312 23:57:46.247199 1984 log.go:172] (0xc0005f8aa0) (5) Data frame handling\nI0312 23:57:46.247210 1984 log.go:172] (0xc0005f8aa0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0312 23:57:46.261288 1984 log.go:172] (0xc000a11760) Data frame received for 3\nI0312 23:57:46.261310 1984 log.go:172] (0xc0007e3680) (3) Data frame handling\nI0312 23:57:46.261316 1984 log.go:172] (0xc0007e3680) (3) Data frame sent\nI0312 23:57:46.261321 1984 log.go:172] (0xc000a11760) Data frame received for 3\nI0312 
23:57:46.261325 1984 log.go:172] (0xc0007e3680) (3) Data frame handling\nI0312 23:57:46.261340 1984 log.go:172] (0xc000a11760) Data frame received for 5\nI0312 23:57:46.261344 1984 log.go:172] (0xc0005f8aa0) (5) Data frame handling\nI0312 23:57:46.262433 1984 log.go:172] (0xc000a11760) Data frame received for 1\nI0312 23:57:46.262445 1984 log.go:172] (0xc000b46820) (1) Data frame handling\nI0312 23:57:46.262453 1984 log.go:172] (0xc000b46820) (1) Data frame sent\nI0312 23:57:46.262461 1984 log.go:172] (0xc000a11760) (0xc000b46820) Stream removed, broadcasting: 1\nI0312 23:57:46.262473 1984 log.go:172] (0xc000a11760) Go away received\nI0312 23:57:46.262717 1984 log.go:172] (0xc000a11760) (0xc000b46820) Stream removed, broadcasting: 1\nI0312 23:57:46.262727 1984 log.go:172] (0xc000a11760) (0xc0007e3680) Stream removed, broadcasting: 3\nI0312 23:57:46.262731 1984 log.go:172] (0xc000a11760) (0xc0005f8aa0) Stream removed, broadcasting: 5\n" Mar 12 23:57:46.264: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 12 23:57:46.264: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 12 23:57:46.264: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6469 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 12 23:57:46.439: INFO: stderr: "I0312 23:57:46.344615 2004 log.go:172] (0xc00073a0b0) (0xc0009fe000) Create stream\nI0312 23:57:46.344652 2004 log.go:172] (0xc00073a0b0) (0xc0009fe000) Stream added, broadcasting: 1\nI0312 23:57:46.346195 2004 log.go:172] (0xc00073a0b0) Reply frame received for 1\nI0312 23:57:46.346219 2004 log.go:172] (0xc00073a0b0) (0xc0006ab220) Create stream\nI0312 23:57:46.346226 2004 log.go:172] (0xc00073a0b0) (0xc0006ab220) Stream added, broadcasting: 3\nI0312 23:57:46.346704 2004 log.go:172] (0xc00073a0b0) Reply frame received for 3\nI0312 23:57:46.346723 2004 log.go:172] (0xc00073a0b0) (0xc00031a000) Create stream\nI0312 23:57:46.346731 2004 log.go:172] (0xc00073a0b0) (0xc00031a000) Stream added, broadcasting: 5\nI0312 23:57:46.347184 2004 log.go:172] (0xc00073a0b0) Reply frame received for 5\nI0312 23:57:46.407416 2004 log.go:172] (0xc00073a0b0) Data frame received for 5\nI0312 23:57:46.407436 2004 log.go:172] (0xc00031a000) (5) Data frame handling\nI0312 23:57:46.407450 2004 log.go:172] (0xc00031a000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0312 23:57:46.435104 2004 log.go:172] (0xc00073a0b0) Data frame received for 3\nI0312 23:57:46.435133 2004 log.go:172] (0xc0006ab220) (3) Data frame handling\nI0312 23:57:46.435151 2004 log.go:172] (0xc0006ab220) (3) Data frame sent\nI0312 23:57:46.435161 2004 log.go:172] (0xc00073a0b0) Data frame received for 3\nI0312 23:57:46.435178 2004 log.go:172] (0xc0006ab220) (3) Data frame handling\nI0312 23:57:46.435219 2004 log.go:172] (0xc00073a0b0) Data frame received for 5\nI0312 23:57:46.435232 2004 log.go:172] (0xc00031a000) (5) Data frame handling\nI0312 23:57:46.436648 2004 log.go:172] (0xc00073a0b0) Data frame received for 1\nI0312 23:57:46.436660 2004 log.go:172] (0xc0009fe000) (1) Data frame handling\nI0312 23:57:46.436670 2004 log.go:172] (0xc0009fe000) (1) Data frame sent\nI0312 23:57:46.436677 2004 log.go:172] (0xc00073a0b0) (0xc0009fe000) Stream removed, broadcasting: 1\nI0312 23:57:46.436690 2004 log.go:172] (0xc00073a0b0) Go away received\nI0312 
23:57:46.436933 2004 log.go:172] (0xc00073a0b0) (0xc0009fe000) Stream removed, broadcasting: 1\nI0312 23:57:46.436945 2004 log.go:172] (0xc00073a0b0) (0xc0006ab220) Stream removed, broadcasting: 3\nI0312 23:57:46.436951 2004 log.go:172] (0xc00073a0b0) (0xc00031a000) Stream removed, broadcasting: 5\n" Mar 12 23:57:46.439: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 12 23:57:46.439: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 12 23:57:46.439: INFO: Waiting for statefulset status.replicas updated to 0 Mar 12 23:57:46.442: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Mar 12 23:57:56.448: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 12 23:57:56.448: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 12 23:57:56.448: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 12 23:57:56.466: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999426s Mar 12 23:57:57.470: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.987341109s Mar 12 23:57:58.475: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.983320159s Mar 12 23:57:59.481: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.978653166s Mar 12 23:58:00.484: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.973139771s Mar 12 23:58:01.487: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.969769851s Mar 12 23:58:02.491: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.966783447s Mar 12 23:58:03.495: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.962715259s Mar 12 23:58:04.499: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.958392435s Mar 12 23:58:05.503: INFO: Verifying statefulset ss doesn't scale past 3 for another 954.405428ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-6469 Mar 12 23:58:06.507: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6469 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 23:58:06.676: INFO: stderr: "I0312 23:58:06.619696 2024 log.go:172] (0xc00053a6e0) (0xc0006372c0) Create stream\nI0312 23:58:06.619740 2024 log.go:172] (0xc00053a6e0) (0xc0006372c0) Stream added, broadcasting: 1\nI0312 23:58:06.621398 2024 log.go:172] (0xc00053a6e0) Reply frame received for 1\nI0312 23:58:06.621426 2024 log.go:172] (0xc00053a6e0) (0xc0008d2000) Create stream\nI0312 23:58:06.621436 2024 log.go:172] (0xc00053a6e0) (0xc0008d2000) Stream added, broadcasting: 3\nI0312 23:58:06.622243 2024 log.go:172] (0xc00053a6e0) Reply frame received for 3\nI0312 23:58:06.622266 2024 log.go:172] (0xc00053a6e0) (0xc0006374a0) Create stream\nI0312 23:58:06.622274 2024 log.go:172] (0xc00053a6e0) (0xc0006374a0) Stream added, broadcasting: 5\nI0312 23:58:06.622894 2024 log.go:172] (0xc00053a6e0) Reply frame received for 5\nI0312 23:58:06.671740 2024 log.go:172] (0xc00053a6e0) Data frame received for 5\nI0312 23:58:06.671774 2024 log.go:172] (0xc0006374a0) (5) Data frame handling\nI0312 23:58:06.671784 2024 log.go:172] (0xc0006374a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0312 
23:58:06.671794 2024 log.go:172] (0xc00053a6e0) Data frame received for 3\nI0312 23:58:06.671801 2024 log.go:172] (0xc0008d2000) (3) Data frame handling\nI0312 23:58:06.671809 2024 log.go:172] (0xc0008d2000) (3) Data frame sent\nI0312 23:58:06.671876 2024 log.go:172] (0xc00053a6e0) Data frame received for 5\nI0312 23:58:06.671890 2024 log.go:172] (0xc0006374a0) (5) Data frame handling\nI0312 23:58:06.672047 2024 log.go:172] (0xc00053a6e0) Data frame received for 3\nI0312 23:58:06.672059 2024 log.go:172] (0xc0008d2000) (3) Data frame handling\nI0312 23:58:06.673031 2024 log.go:172] (0xc00053a6e0) Data frame received for 1\nI0312 23:58:06.673048 2024 log.go:172] (0xc0006372c0) (1) Data frame handling\nI0312 23:58:06.673062 2024 log.go:172] (0xc0006372c0) (1) Data frame sent\nI0312 23:58:06.673074 2024 log.go:172] (0xc00053a6e0) (0xc0006372c0) Stream removed, broadcasting: 1\nI0312 23:58:06.673089 2024 log.go:172] (0xc00053a6e0) Go away received\nI0312 23:58:06.673557 2024 log.go:172] (0xc00053a6e0) (0xc0006372c0) Stream removed, broadcasting: 1\nI0312 23:58:06.673573 2024 log.go:172] (0xc00053a6e0) (0xc0008d2000) Stream removed, broadcasting: 3\nI0312 23:58:06.673579 2024 log.go:172] (0xc00053a6e0) (0xc0006374a0) Stream removed, broadcasting: 5\n" Mar 12 23:58:06.676: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 12 23:58:06.676: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 12 23:58:06.676: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6469 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 23:58:07.032: INFO: stderr: "I0312 23:58:06.771993 2045 log.go:172] (0xc0003c9b80) (0xc0008ea280) Create stream\nI0312 23:58:06.772029 2045 log.go:172] (0xc0003c9b80) (0xc0008ea280) Stream added, broadcasting: 1\nI0312 23:58:06.776551 2045 log.go:172] (0xc0003c9b80) Reply frame received for 1\nI0312 23:58:06.776585 2045 log.go:172] (0xc0003c9b80) (0xc0008ea460) Create stream\nI0312 23:58:06.776597 2045 log.go:172] (0xc0003c9b80) (0xc0008ea460) Stream added, broadcasting: 3\nI0312 23:58:06.777392 2045 log.go:172] (0xc0003c9b80) Reply frame received for 3\nI0312 23:58:06.777416 2045 log.go:172] (0xc0003c9b80) (0xc0005db680) Create stream\nI0312 23:58:06.777423 2045 log.go:172] (0xc0003c9b80) (0xc0005db680) Stream added, broadcasting: 5\nI0312 23:58:06.778047 2045 log.go:172] (0xc0003c9b80) Reply frame received for 5\nI0312 23:58:06.823530 2045 log.go:172] (0xc0003c9b80) Data frame received for 5\nI0312 23:58:06.823554 2045 log.go:172] (0xc0005db680) (5) Data frame handling\nI0312 23:58:06.823565 2045 log.go:172] (0xc0005db680) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0312 23:58:07.025970 2045 log.go:172] (0xc0003c9b80) Data frame received for 3\nI0312 23:58:07.025995 2045 log.go:172] (0xc0008ea460) (3) Data frame handling\nI0312 23:58:07.026017 2045 log.go:172] (0xc0008ea460) (3) Data frame sent\nI0312 23:58:07.026036 2045 log.go:172] (0xc0003c9b80) Data frame received for 3\nI0312 23:58:07.026043 2045 log.go:172] (0xc0008ea460) (3) Data frame handling\nI0312 23:58:07.026344 2045 log.go:172] (0xc0003c9b80) Data frame received for 5\nI0312 23:58:07.026377 2045 log.go:172] (0xc0005db680) (5) Data frame handling\nI0312 23:58:07.028023 2045 log.go:172] (0xc0003c9b80) Data frame received for 1\nI0312 23:58:07.028050 2045 
log.go:172] (0xc0008ea280) (1) Data frame handling\nI0312 23:58:07.028065 2045 log.go:172] (0xc0008ea280) (1) Data frame sent\nI0312 23:58:07.028085 2045 log.go:172] (0xc0003c9b80) (0xc0008ea280) Stream removed, broadcasting: 1\nI0312 23:58:07.028105 2045 log.go:172] (0xc0003c9b80) Go away received\nI0312 23:58:07.028421 2045 log.go:172] (0xc0003c9b80) (0xc0008ea280) Stream removed, broadcasting: 1\nI0312 23:58:07.028445 2045 log.go:172] (0xc0003c9b80) (0xc0008ea460) Stream removed, broadcasting: 3\nI0312 23:58:07.028455 2045 log.go:172] (0xc0003c9b80) (0xc0005db680) Stream removed, broadcasting: 5\n" Mar 12 23:58:07.032: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 12 23:58:07.032: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 12 23:58:07.032: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6469 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 12 23:58:07.255: INFO: stderr: "I0312 23:58:07.195514 2065 log.go:172] (0xc0009f7340) (0xc0009d65a0) Create stream\nI0312 23:58:07.195574 2065 log.go:172] (0xc0009f7340) (0xc0009d65a0) Stream added, broadcasting: 1\nI0312 23:58:07.197667 2065 log.go:172] (0xc0009f7340) Reply frame received for 1\nI0312 23:58:07.197724 2065 log.go:172] (0xc0009f7340) (0xc0009d6640) Create stream\nI0312 23:58:07.197734 2065 log.go:172] (0xc0009f7340) (0xc0009d6640) Stream added, broadcasting: 3\nI0312 23:58:07.198774 2065 log.go:172] (0xc0009f7340) Reply frame received for 3\nI0312 23:58:07.198808 2065 log.go:172] (0xc0009f7340) (0xc0009d66e0) Create stream\nI0312 23:58:07.198820 2065 log.go:172] (0xc0009f7340) (0xc0009d66e0) Stream added, broadcasting: 5\nI0312 23:58:07.199802 2065 log.go:172] (0xc0009f7340) Reply frame received for 5\nI0312 23:58:07.252059 2065 log.go:172] (0xc0009f7340) Data frame received for 3\nI0312 23:58:07.252117 2065 log.go:172] (0xc0009d6640) (3) Data frame handling\nI0312 23:58:07.252128 2065 log.go:172] (0xc0009d6640) (3) Data frame sent\nI0312 23:58:07.252142 2065 log.go:172] (0xc0009f7340) Data frame received for 3\nI0312 23:58:07.252148 2065 log.go:172] (0xc0009d6640) (3) Data frame handling\nI0312 23:58:07.252189 2065 log.go:172] (0xc0009f7340) Data frame received for 5\nI0312 23:58:07.252214 2065 log.go:172] (0xc0009d66e0) (5) Data frame handling\nI0312 23:58:07.252237 2065 log.go:172] (0xc0009d66e0) (5) Data frame sent\nI0312 23:58:07.252250 2065 log.go:172] (0xc0009f7340) Data frame received for 5\nI0312 23:58:07.252261 2065 log.go:172] (0xc0009d66e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0312 23:58:07.253182 2065 log.go:172] (0xc0009f7340) Data frame received for 1\nI0312 23:58:07.253198 2065 log.go:172] (0xc0009d65a0) (1) Data frame handling\nI0312 23:58:07.253208 2065 log.go:172] (0xc0009d65a0) (1) Data frame sent\nI0312 23:58:07.253215 2065 log.go:172] (0xc0009f7340) (0xc0009d65a0) Stream removed, broadcasting: 1\nI0312 23:58:07.253224 2065 log.go:172] (0xc0009f7340) Go away received\nI0312 23:58:07.253548 2065 log.go:172] (0xc0009f7340) (0xc0009d65a0) Stream removed, broadcasting: 1\nI0312 23:58:07.253562 2065 log.go:172] (0xc0009f7340) (0xc0009d6640) Stream removed, broadcasting: 3\nI0312 23:58:07.253567 2065 log.go:172] (0xc0009f7340) (0xc0009d66e0) Stream removed, broadcasting: 5\n" Mar 12 23:58:07.255: INFO: stdout: 
"'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 12 23:58:07.255: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 12 23:58:07.255: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 12 23:58:17.267: INFO: Deleting all statefulset in ns statefulset-6469 Mar 12 23:58:17.270: INFO: Scaling statefulset ss to 0 Mar 12 23:58:17.276: INFO: Waiting for statefulset status.replicas updated to 0 Mar 12 23:58:17.279: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:58:17.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6469" for this suite. • [SLOW TEST:72.126 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":275,"completed":108,"skipped":1554,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:58:17.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 12 23:58:17.408: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1afa64d8-f2a6-42ad-9551-1bb207a541e2" in namespace "projected-5144" to be "Succeeded or Failed" Mar 12 23:58:17.424: INFO: Pod "downwardapi-volume-1afa64d8-f2a6-42ad-9551-1bb207a541e2": Phase="Pending", Reason="", readiness=false. Elapsed: 16.328268ms Mar 12 23:58:19.428: INFO: Pod "downwardapi-volume-1afa64d8-f2a6-42ad-9551-1bb207a541e2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.020087003s Mar 12 23:58:21.465: INFO: Pod "downwardapi-volume-1afa64d8-f2a6-42ad-9551-1bb207a541e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057009722s STEP: Saw pod success Mar 12 23:58:21.465: INFO: Pod "downwardapi-volume-1afa64d8-f2a6-42ad-9551-1bb207a541e2" satisfied condition "Succeeded or Failed" Mar 12 23:58:21.468: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-1afa64d8-f2a6-42ad-9551-1bb207a541e2 container client-container: STEP: delete the pod Mar 12 23:58:21.499: INFO: Waiting for pod downwardapi-volume-1afa64d8-f2a6-42ad-9551-1bb207a541e2 to disappear Mar 12 23:58:21.502: INFO: Pod downwardapi-volume-1afa64d8-f2a6-42ad-9551-1bb207a541e2 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:58:21.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5144" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":109,"skipped":1564,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:58:21.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 12 23:58:23.629: INFO: Waiting up to 5m0s for pod "client-envvars-38753660-571f-4487-aabf-4bab00d3935d" in namespace "pods-0" to be "Succeeded or Failed" Mar 12 23:58:23.680: INFO: Pod "client-envvars-38753660-571f-4487-aabf-4bab00d3935d": Phase="Pending", Reason="", readiness=false. Elapsed: 51.242173ms Mar 12 23:58:25.683: INFO: Pod "client-envvars-38753660-571f-4487-aabf-4bab00d3935d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.054499937s STEP: Saw pod success Mar 12 23:58:25.683: INFO: Pod "client-envvars-38753660-571f-4487-aabf-4bab00d3935d" satisfied condition "Succeeded or Failed" Mar 12 23:58:25.685: INFO: Trying to get logs from node latest-worker pod client-envvars-38753660-571f-4487-aabf-4bab00d3935d container env3cont: STEP: delete the pod Mar 12 23:58:25.701: INFO: Waiting for pod client-envvars-38753660-571f-4487-aabf-4bab00d3935d to disappear Mar 12 23:58:25.706: INFO: Pod client-envvars-38753660-571f-4487-aabf-4bab00d3935d no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:58:25.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-0" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":275,"completed":110,"skipped":1577,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:58:25.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name cm-test-opt-del-5fe43be9-9dcf-4ab0-acb5-56dfb0096450 STEP: Creating configMap with name cm-test-opt-upd-73b2024e-6aa6-491d-8034-87e31afbfec4 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-5fe43be9-9dcf-4ab0-acb5-56dfb0096450 STEP: Updating configmap cm-test-opt-upd-73b2024e-6aa6-491d-8034-87e31afbfec4 STEP: Creating configMap with name cm-test-opt-create-fb1a10de-1c1e-41c8-920e-daba9bbc465c STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 12 23:59:58.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7573" for this suite. • [SLOW TEST:93.092 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":111,"skipped":1595,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 12 23:59:58.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-d66f4789-4e23-4f62-b8f1-7100fd77f4ad in namespace container-probe-8036 Mar 13 00:00:00.897: INFO: Started pod busybox-d66f4789-4e23-4f62-b8f1-7100fd77f4ad in namespace container-probe-8036 STEP: checking the pod's current state and verifying that 
restartCount is present Mar 13 00:00:00.899: INFO: Initial restart count of pod busybox-d66f4789-4e23-4f62-b8f1-7100fd77f4ad is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:04:01.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8036" for this suite. • [SLOW TEST:242.732 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":112,"skipped":1627,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:04:01.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Mar 13 00:04:01.584: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1354 /api/v1/namespaces/watch-1354/configmaps/e2e-watch-test-configmap-a 1252ae35-d1a8-4b8a-bf02-ee5f5ff5b21d 1217238 0 2020-03-13 00:04:01 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 13 00:04:01.584: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1354 /api/v1/namespaces/watch-1354/configmaps/e2e-watch-test-configmap-a 1252ae35-d1a8-4b8a-bf02-ee5f5ff5b21d 1217238 0 2020-03-13 00:04:01 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Mar 13 00:04:11.591: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1354 /api/v1/namespaces/watch-1354/configmaps/e2e-watch-test-configmap-a 1252ae35-d1a8-4b8a-bf02-ee5f5ff5b21d 1217278 0 2020-03-13 00:04:01 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 13 00:04:11.591: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1354 /api/v1/namespaces/watch-1354/configmaps/e2e-watch-test-configmap-a 
1252ae35-d1a8-4b8a-bf02-ee5f5ff5b21d 1217278 0 2020-03-13 00:04:01 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Mar 13 00:04:21.599: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1354 /api/v1/namespaces/watch-1354/configmaps/e2e-watch-test-configmap-a 1252ae35-d1a8-4b8a-bf02-ee5f5ff5b21d 1217308 0 2020-03-13 00:04:01 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 13 00:04:21.599: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1354 /api/v1/namespaces/watch-1354/configmaps/e2e-watch-test-configmap-a 1252ae35-d1a8-4b8a-bf02-ee5f5ff5b21d 1217308 0 2020-03-13 00:04:01 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Mar 13 00:04:31.603: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1354 /api/v1/namespaces/watch-1354/configmaps/e2e-watch-test-configmap-a 1252ae35-d1a8-4b8a-bf02-ee5f5ff5b21d 1217338 0 2020-03-13 00:04:01 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 13 00:04:31.603: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-1354 /api/v1/namespaces/watch-1354/configmaps/e2e-watch-test-configmap-a 1252ae35-d1a8-4b8a-bf02-ee5f5ff5b21d 1217338 0 2020-03-13 00:04:01 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Mar 13 00:04:41.610: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1354 /api/v1/namespaces/watch-1354/configmaps/e2e-watch-test-configmap-b 7370af9e-aa63-46d8-abfa-87b5785a3d98 1217368 0 2020-03-13 00:04:41 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 13 00:04:41.610: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1354 /api/v1/namespaces/watch-1354/configmaps/e2e-watch-test-configmap-b 7370af9e-aa63-46d8-abfa-87b5785a3d98 1217368 0 2020-03-13 00:04:41 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Mar 13 00:04:51.617: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1354 /api/v1/namespaces/watch-1354/configmaps/e2e-watch-test-configmap-b 7370af9e-aa63-46d8-abfa-87b5785a3d98 1217399 0 2020-03-13 00:04:41 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 13 00:04:51.617: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-1354 /api/v1/namespaces/watch-1354/configmaps/e2e-watch-test-configmap-b 7370af9e-aa63-46d8-abfa-87b5785a3d98 1217399 0 2020-03-13 00:04:41 +0000 UTC 
map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:05:01.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1354" for this suite. • [SLOW TEST:60.087 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":113,"skipped":1666,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:05:01.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:05:01.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5566" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":275,"completed":114,"skipped":1683,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:05:01.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 13 00:05:01.803: INFO: Waiting up to 5m0s for pod "pod-54576e18-c34e-4abb-ab59-afb5d491b0dc" in namespace "emptydir-3798" to be "Succeeded or Failed" Mar 13 00:05:01.807: INFO: Pod "pod-54576e18-c34e-4abb-ab59-afb5d491b0dc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.782425ms Mar 13 00:05:03.811: INFO: Pod "pod-54576e18-c34e-4abb-ab59-afb5d491b0dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007658716s STEP: Saw pod success Mar 13 00:05:03.811: INFO: Pod "pod-54576e18-c34e-4abb-ab59-afb5d491b0dc" satisfied condition "Succeeded or Failed" Mar 13 00:05:03.814: INFO: Trying to get logs from node latest-worker pod pod-54576e18-c34e-4abb-ab59-afb5d491b0dc container test-container: STEP: delete the pod Mar 13 00:05:03.845: INFO: Waiting for pod pod-54576e18-c34e-4abb-ab59-afb5d491b0dc to disappear Mar 13 00:05:03.849: INFO: Pod pod-54576e18-c34e-4abb-ab59-afb5d491b0dc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:05:03.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3798" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":115,"skipped":1718,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:05:03.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8933.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8933.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8933.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8933.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8933.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8933.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8933.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8933.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8933.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8933.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-8933.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 232.55.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.55.232_udp@PTR;check="$$(dig +tcp +noall +answer +search 232.55.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.55.232_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8933.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8933.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8933.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8933.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8933.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8933.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8933.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8933.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8933.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8933.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8933.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 232.55.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.55.232_udp@PTR;check="$$(dig +tcp +noall +answer +search 232.55.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.55.232_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 13 00:05:08.014: INFO: Unable to read wheezy_udp@dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:08.016: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:08.019: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:08.021: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:08.040: INFO: Unable to read jessie_udp@dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:08.042: INFO: Unable to read jessie_tcp@dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:08.045: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:08.048: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:08.064: INFO: Lookups using dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14 failed for: [wheezy_udp@dns-test-service.dns-8933.svc.cluster.local wheezy_tcp@dns-test-service.dns-8933.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local jessie_udp@dns-test-service.dns-8933.svc.cluster.local jessie_tcp@dns-test-service.dns-8933.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local] Mar 13 00:05:13.093: INFO: Unable to read wheezy_udp@dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:13.096: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods 
dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:13.099: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:13.101: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:13.117: INFO: Unable to read jessie_udp@dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:13.119: INFO: Unable to read jessie_tcp@dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:13.121: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:13.123: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:13.142: INFO: Lookups using dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14 failed for: [wheezy_udp@dns-test-service.dns-8933.svc.cluster.local wheezy_tcp@dns-test-service.dns-8933.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local jessie_udp@dns-test-service.dns-8933.svc.cluster.local jessie_tcp@dns-test-service.dns-8933.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local] Mar 13 00:05:18.069: INFO: Unable to read wheezy_udp@dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:18.072: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:18.074: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:18.077: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:18.094: INFO: Unable to read jessie_udp@dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the 
server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:18.097: INFO: Unable to read jessie_tcp@dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:18.099: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:18.101: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:18.115: INFO: Lookups using dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14 failed for: [wheezy_udp@dns-test-service.dns-8933.svc.cluster.local wheezy_tcp@dns-test-service.dns-8933.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local jessie_udp@dns-test-service.dns-8933.svc.cluster.local jessie_tcp@dns-test-service.dns-8933.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local] Mar 13 00:05:23.067: INFO: Unable to read wheezy_udp@dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:23.072: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:23.074: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:23.076: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:23.092: INFO: Unable to read jessie_udp@dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:23.094: INFO: Unable to read jessie_tcp@dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:23.096: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:23.098: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local from pod 
dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:23.109: INFO: Lookups using dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14 failed for: [wheezy_udp@dns-test-service.dns-8933.svc.cluster.local wheezy_tcp@dns-test-service.dns-8933.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local jessie_udp@dns-test-service.dns-8933.svc.cluster.local jessie_tcp@dns-test-service.dns-8933.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local] Mar 13 00:05:28.068: INFO: Unable to read wheezy_udp@dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:28.071: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:28.073: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:28.076: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:28.090: INFO: Unable to read jessie_udp@dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:28.092: INFO: Unable to read jessie_tcp@dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:28.094: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:28.096: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:28.108: INFO: Lookups using dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14 failed for: [wheezy_udp@dns-test-service.dns-8933.svc.cluster.local wheezy_tcp@dns-test-service.dns-8933.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local jessie_udp@dns-test-service.dns-8933.svc.cluster.local jessie_tcp@dns-test-service.dns-8933.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local] Mar 13 
00:05:33.070: INFO: Unable to read wheezy_udp@dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:33.078: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:33.081: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:33.083: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:33.100: INFO: Unable to read jessie_udp@dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:33.103: INFO: Unable to read jessie_tcp@dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:33.105: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:33.107: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local from pod dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14: the server could not find the requested resource (get pods dns-test-825484ea-b506-4c41-abc3-d6c992194c14) Mar 13 00:05:33.120: INFO: Lookups using dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14 failed for: [wheezy_udp@dns-test-service.dns-8933.svc.cluster.local wheezy_tcp@dns-test-service.dns-8933.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local jessie_udp@dns-test-service.dns-8933.svc.cluster.local jessie_tcp@dns-test-service.dns-8933.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8933.svc.cluster.local] Mar 13 00:05:38.113: INFO: DNS probes using dns-8933/dns-test-825484ea-b506-4c41-abc3-d6c992194c14 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:05:38.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8933" for this suite. 
• [SLOW TEST:34.500 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":275,"completed":116,"skipped":1742,"failed":0} SSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:05:38.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Mar 13 00:05:38.447: INFO: Created pod &Pod{ObjectMeta:{dns-5210 dns-5210 /api/v1/namespaces/dns-5210/pods/dns-5210 49ac7809-69a6-4d13-97fe-28adbc5642a4 1217621 0 2020-03-13 00:05:38 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cflnz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cflnz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cflnz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Ke
y:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 13 00:05:38.448: INFO: The status of Pod dns-5210 is Pending, waiting for it to be Running (with Ready = true) Mar 13 00:05:40.452: INFO: The status of Pod dns-5210 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... Mar 13 00:05:40.452: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-5210 PodName:dns-5210 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 13 00:05:40.452: INFO: >>> kubeConfig: /root/.kube/config I0313 00:05:40.473599 7 log.go:172] (0xc002b69d90) (0xc001759540) Create stream I0313 00:05:40.473624 7 log.go:172] (0xc002b69d90) (0xc001759540) Stream added, broadcasting: 1 I0313 00:05:40.475376 7 log.go:172] (0xc002b69d90) Reply frame received for 1 I0313 00:05:40.475402 7 log.go:172] (0xc002b69d90) (0xc001037f40) Create stream I0313 00:05:40.475411 7 log.go:172] (0xc002b69d90) (0xc001037f40) Stream added, broadcasting: 3 I0313 00:05:40.475984 7 log.go:172] (0xc002b69d90) Reply frame received for 3 I0313 00:05:40.476009 7 log.go:172] (0xc002b69d90) (0xc000d163c0) Create stream I0313 00:05:40.476021 7 log.go:172] (0xc002b69d90) (0xc000d163c0) Stream added, broadcasting: 5 I0313 00:05:40.476557 7 log.go:172] (0xc002b69d90) Reply frame received for 5 I0313 00:05:40.541929 7 log.go:172] (0xc002b69d90) Data frame received for 3 I0313 00:05:40.541953 7 log.go:172] (0xc001037f40) (3) Data frame handling I0313 00:05:40.541964 7 log.go:172] (0xc001037f40) (3) Data frame sent I0313 00:05:40.542847 7 log.go:172] (0xc002b69d90) Data frame received for 5 I0313 00:05:40.542866 7 log.go:172] (0xc000d163c0) (5) Data frame handling I0313 00:05:40.543031 7 log.go:172] (0xc002b69d90) Data frame received for 3 I0313 00:05:40.543048 7 log.go:172] (0xc001037f40) (3) Data frame handling I0313 00:05:40.544359 7 log.go:172] (0xc002b69d90) Data frame received for 1 I0313 00:05:40.544373 7 log.go:172] (0xc001759540) (1) Data frame handling I0313 00:05:40.544383 7 log.go:172] (0xc001759540) (1) Data frame sent I0313 00:05:40.544403 7 log.go:172] (0xc002b69d90) (0xc001759540) Stream removed, broadcasting: 1 I0313 00:05:40.544415 7 log.go:172] (0xc002b69d90) Go away received I0313 00:05:40.544562 7 log.go:172] (0xc002b69d90) (0xc001759540) Stream removed, broadcasting: 1 I0313 00:05:40.544587 7 log.go:172] (0xc002b69d90) (0xc001037f40) Stream removed, broadcasting: 3 I0313 00:05:40.544597 7 log.go:172] (0xc002b69d90) (0xc000d163c0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
Mar 13 00:05:40.544: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-5210 PodName:dns-5210 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 13 00:05:40.544: INFO: >>> kubeConfig: /root/.kube/config I0313 00:05:40.565843 7 log.go:172] (0xc000dd4630) (0xc000d17180) Create stream I0313 00:05:40.565861 7 log.go:172] (0xc000dd4630) (0xc000d17180) Stream added, broadcasting: 1 I0313 00:05:40.567645 7 log.go:172] (0xc000dd4630) Reply frame received for 1 I0313 00:05:40.567667 7 log.go:172] (0xc000dd4630) (0xc000d4ef00) Create stream I0313 00:05:40.567677 7 log.go:172] (0xc000dd4630) (0xc000d4ef00) Stream added, broadcasting: 3 I0313 00:05:40.568485 7 log.go:172] (0xc000dd4630) Reply frame received for 3 I0313 00:05:40.568534 7 log.go:172] (0xc000dd4630) (0xc000cd6000) Create stream I0313 00:05:40.568548 7 log.go:172] (0xc000dd4630) (0xc000cd6000) Stream added, broadcasting: 5 I0313 00:05:40.569330 7 log.go:172] (0xc000dd4630) Reply frame received for 5 I0313 00:05:40.644906 7 log.go:172] (0xc000dd4630) Data frame received for 3 I0313 00:05:40.644938 7 log.go:172] (0xc000d4ef00) (3) Data frame handling I0313 00:05:40.644956 7 log.go:172] (0xc000d4ef00) (3) Data frame sent I0313 00:05:40.645533 7 log.go:172] (0xc000dd4630) Data frame received for 5 I0313 00:05:40.645587 7 log.go:172] (0xc000cd6000) (5) Data frame handling I0313 00:05:40.645672 7 log.go:172] (0xc000dd4630) Data frame received for 3 I0313 00:05:40.645691 7 log.go:172] (0xc000d4ef00) (3) Data frame handling I0313 00:05:40.647132 7 log.go:172] (0xc000dd4630) Data frame received for 1 I0313 00:05:40.647156 7 log.go:172] (0xc000d17180) (1) Data frame handling I0313 00:05:40.647176 7 log.go:172] (0xc000d17180) (1) Data frame sent I0313 00:05:40.647197 7 log.go:172] (0xc000dd4630) (0xc000d17180) Stream removed, broadcasting: 1 I0313 00:05:40.647218 7 log.go:172] (0xc000dd4630) Go away received I0313 00:05:40.647313 7 log.go:172] (0xc000dd4630) (0xc000d17180) Stream removed, broadcasting: 1 I0313 00:05:40.647335 7 log.go:172] (0xc000dd4630) (0xc000d4ef00) Stream removed, broadcasting: 3 I0313 00:05:40.647346 7 log.go:172] (0xc000dd4630) (0xc000cd6000) Stream removed, broadcasting: 5 Mar 13 00:05:40.647: INFO: Deleting pod dns-5210... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:05:40.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5210" for this suite. 
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":275,"completed":117,"skipped":1746,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:05:40.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 13 00:05:40.750: INFO: Waiting up to 5m0s for pod "pod-dd47c6ef-c247-405d-bff8-80a6e0e5116a" in namespace "emptydir-1963" to be "Succeeded or Failed" Mar 13 00:05:40.754: INFO: Pod "pod-dd47c6ef-c247-405d-bff8-80a6e0e5116a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.92764ms Mar 13 00:05:42.757: INFO: Pod "pod-dd47c6ef-c247-405d-bff8-80a6e0e5116a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006454222s Mar 13 00:05:44.761: INFO: Pod "pod-dd47c6ef-c247-405d-bff8-80a6e0e5116a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010810703s STEP: Saw pod success Mar 13 00:05:44.761: INFO: Pod "pod-dd47c6ef-c247-405d-bff8-80a6e0e5116a" satisfied condition "Succeeded or Failed" Mar 13 00:05:44.764: INFO: Trying to get logs from node latest-worker pod pod-dd47c6ef-c247-405d-bff8-80a6e0e5116a container test-container: STEP: delete the pod Mar 13 00:05:44.789: INFO: Waiting for pod pod-dd47c6ef-c247-405d-bff8-80a6e0e5116a to disappear Mar 13 00:05:44.793: INFO: Pod pod-dd47c6ef-c247-405d-bff8-80a6e0e5116a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:05:44.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1963" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":118,"skipped":1774,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:05:44.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name secret-emptykey-test-cdd0296c-db20-4942-b990-83aa81e94384 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:05:44.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2674" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":275,"completed":119,"skipped":1834,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:05:44.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-1167 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-1167 I0313 00:05:44.986338 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-1167, replica count: 2 I0313 00:05:48.036914 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 13 00:05:48.036: INFO: Creating new exec pod Mar 13 00:05:51.071: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-1167 execpodptwkl -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 13 00:05:53.243: INFO: stderr: "I0313 00:05:53.181024 2082 log.go:172] (0xc000d82630) (0xc000d7e3c0) Create stream\nI0313 00:05:53.181061 2082 log.go:172] (0xc000d82630) (0xc000d7e3c0) Stream added, broadcasting: 1\nI0313 
00:05:53.183373 2082 log.go:172] (0xc000d82630) Reply frame received for 1\nI0313 00:05:53.183412 2082 log.go:172] (0xc000d82630) (0xc0007c4000) Create stream\nI0313 00:05:53.183427 2082 log.go:172] (0xc000d82630) (0xc0007c4000) Stream added, broadcasting: 3\nI0313 00:05:53.184094 2082 log.go:172] (0xc000d82630) Reply frame received for 3\nI0313 00:05:53.184117 2082 log.go:172] (0xc000d82630) (0xc0007c40a0) Create stream\nI0313 00:05:53.184124 2082 log.go:172] (0xc000d82630) (0xc0007c40a0) Stream added, broadcasting: 5\nI0313 00:05:53.184838 2082 log.go:172] (0xc000d82630) Reply frame received for 5\nI0313 00:05:53.237014 2082 log.go:172] (0xc000d82630) Data frame received for 5\nI0313 00:05:53.237044 2082 log.go:172] (0xc0007c40a0) (5) Data frame handling\nI0313 00:05:53.237059 2082 log.go:172] (0xc0007c40a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0313 00:05:53.238004 2082 log.go:172] (0xc000d82630) Data frame received for 5\nI0313 00:05:53.238027 2082 log.go:172] (0xc0007c40a0) (5) Data frame handling\nI0313 00:05:53.238037 2082 log.go:172] (0xc0007c40a0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0313 00:05:53.238168 2082 log.go:172] (0xc000d82630) Data frame received for 3\nI0313 00:05:53.238180 2082 log.go:172] (0xc0007c4000) (3) Data frame handling\nI0313 00:05:53.238403 2082 log.go:172] (0xc000d82630) Data frame received for 5\nI0313 00:05:53.238413 2082 log.go:172] (0xc0007c40a0) (5) Data frame handling\nI0313 00:05:53.239895 2082 log.go:172] (0xc000d82630) Data frame received for 1\nI0313 00:05:53.239911 2082 log.go:172] (0xc000d7e3c0) (1) Data frame handling\nI0313 00:05:53.239920 2082 log.go:172] (0xc000d7e3c0) (1) Data frame sent\nI0313 00:05:53.239936 2082 log.go:172] (0xc000d82630) (0xc000d7e3c0) Stream removed, broadcasting: 1\nI0313 00:05:53.239950 2082 log.go:172] (0xc000d82630) Go away received\nI0313 00:05:53.240287 2082 log.go:172] (0xc000d82630) (0xc000d7e3c0) Stream removed, broadcasting: 1\nI0313 00:05:53.240300 2082 log.go:172] (0xc000d82630) (0xc0007c4000) Stream removed, broadcasting: 3\nI0313 00:05:53.240306 2082 log.go:172] (0xc000d82630) (0xc0007c40a0) Stream removed, broadcasting: 5\n" Mar 13 00:05:53.243: INFO: stdout: "" Mar 13 00:05:53.243: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-1167 execpodptwkl -- /bin/sh -x -c nc -zv -t -w 2 10.96.115.129 80' Mar 13 00:05:53.388: INFO: stderr: "I0313 00:05:53.330971 2117 log.go:172] (0xc00003ac60) (0xc000689180) Create stream\nI0313 00:05:53.331019 2117 log.go:172] (0xc00003ac60) (0xc000689180) Stream added, broadcasting: 1\nI0313 00:05:53.332406 2117 log.go:172] (0xc00003ac60) Reply frame received for 1\nI0313 00:05:53.332446 2117 log.go:172] (0xc00003ac60) (0xc00031c000) Create stream\nI0313 00:05:53.332454 2117 log.go:172] (0xc00003ac60) (0xc00031c000) Stream added, broadcasting: 3\nI0313 00:05:53.332920 2117 log.go:172] (0xc00003ac60) Reply frame received for 3\nI0313 00:05:53.332957 2117 log.go:172] (0xc00003ac60) (0xc0003c0000) Create stream\nI0313 00:05:53.332965 2117 log.go:172] (0xc00003ac60) (0xc0003c0000) Stream added, broadcasting: 5\nI0313 00:05:53.333736 2117 log.go:172] (0xc00003ac60) Reply frame received for 5\nI0313 00:05:53.384339 2117 log.go:172] (0xc00003ac60) Data frame received for 3\nI0313 00:05:53.384355 2117 log.go:172] (0xc00031c000) (3) Data frame handling\nI0313 00:05:53.384373 2117 log.go:172] (0xc00003ac60) Data frame received for 
5\nI0313 00:05:53.384380 2117 log.go:172] (0xc0003c0000) (5) Data frame handling\nI0313 00:05:53.384385 2117 log.go:172] (0xc0003c0000) (5) Data frame sent\nI0313 00:05:53.384390 2117 log.go:172] (0xc00003ac60) Data frame received for 5\nI0313 00:05:53.384397 2117 log.go:172] (0xc0003c0000) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.115.129 80\nConnection to 10.96.115.129 80 port [tcp/http] succeeded!\nI0313 00:05:53.385479 2117 log.go:172] (0xc00003ac60) Data frame received for 1\nI0313 00:05:53.385508 2117 log.go:172] (0xc000689180) (1) Data frame handling\nI0313 00:05:53.385521 2117 log.go:172] (0xc000689180) (1) Data frame sent\nI0313 00:05:53.385533 2117 log.go:172] (0xc00003ac60) (0xc000689180) Stream removed, broadcasting: 1\nI0313 00:05:53.385550 2117 log.go:172] (0xc00003ac60) Go away received\nI0313 00:05:53.385778 2117 log.go:172] (0xc00003ac60) (0xc000689180) Stream removed, broadcasting: 1\nI0313 00:05:53.385791 2117 log.go:172] (0xc00003ac60) (0xc00031c000) Stream removed, broadcasting: 3\nI0313 00:05:53.385799 2117 log.go:172] (0xc00003ac60) (0xc0003c0000) Stream removed, broadcasting: 5\n" Mar 13 00:05:53.388: INFO: stdout: "" Mar 13 00:05:53.388: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-1167 execpodptwkl -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.16 31960' Mar 13 00:05:53.544: INFO: stderr: "I0313 00:05:53.476677 2138 log.go:172] (0xc00003ba20) (0xc0007ef220) Create stream\nI0313 00:05:53.476711 2138 log.go:172] (0xc00003ba20) (0xc0007ef220) Stream added, broadcasting: 1\nI0313 00:05:53.478520 2138 log.go:172] (0xc00003ba20) Reply frame received for 1\nI0313 00:05:53.478547 2138 log.go:172] (0xc00003ba20) (0xc0007ef400) Create stream\nI0313 00:05:53.478555 2138 log.go:172] (0xc00003ba20) (0xc0007ef400) Stream added, broadcasting: 3\nI0313 00:05:53.479218 2138 log.go:172] (0xc00003ba20) Reply frame received for 3\nI0313 00:05:53.479257 2138 log.go:172] (0xc00003ba20) (0xc000450000) Create stream\nI0313 00:05:53.479271 2138 log.go:172] (0xc00003ba20) (0xc000450000) Stream added, broadcasting: 5\nI0313 00:05:53.480133 2138 log.go:172] (0xc00003ba20) Reply frame received for 5\nI0313 00:05:53.539670 2138 log.go:172] (0xc00003ba20) Data frame received for 5\nI0313 00:05:53.539697 2138 log.go:172] (0xc000450000) (5) Data frame handling\nI0313 00:05:53.539714 2138 log.go:172] (0xc000450000) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.16 31960\nI0313 00:05:53.539811 2138 log.go:172] (0xc00003ba20) Data frame received for 5\nI0313 00:05:53.539833 2138 log.go:172] (0xc000450000) (5) Data frame handling\nI0313 00:05:53.539848 2138 log.go:172] (0xc000450000) (5) Data frame sent\nConnection to 172.17.0.16 31960 port [tcp/31960] succeeded!\nI0313 00:05:53.540278 2138 log.go:172] (0xc00003ba20) Data frame received for 5\nI0313 00:05:53.540298 2138 log.go:172] (0xc00003ba20) Data frame received for 3\nI0313 00:05:53.540317 2138 log.go:172] (0xc0007ef400) (3) Data frame handling\nI0313 00:05:53.540332 2138 log.go:172] (0xc000450000) (5) Data frame handling\nI0313 00:05:53.541336 2138 log.go:172] (0xc00003ba20) Data frame received for 1\nI0313 00:05:53.541352 2138 log.go:172] (0xc0007ef220) (1) Data frame handling\nI0313 00:05:53.541368 2138 log.go:172] (0xc0007ef220) (1) Data frame sent\nI0313 00:05:53.541380 2138 log.go:172] (0xc00003ba20) (0xc0007ef220) Stream removed, broadcasting: 1\nI0313 00:05:53.541397 2138 log.go:172] (0xc00003ba20) Go away received\nI0313 00:05:53.541703 2138 
log.go:172] (0xc00003ba20) (0xc0007ef220) Stream removed, broadcasting: 1\nI0313 00:05:53.541718 2138 log.go:172] (0xc00003ba20) (0xc0007ef400) Stream removed, broadcasting: 3\nI0313 00:05:53.541724 2138 log.go:172] (0xc00003ba20) (0xc000450000) Stream removed, broadcasting: 5\n" Mar 13 00:05:53.544: INFO: stdout: "" Mar 13 00:05:53.544: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-1167 execpodptwkl -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.18 31960' Mar 13 00:05:53.708: INFO: stderr: "I0313 00:05:53.633209 2158 log.go:172] (0xc00093ce70) (0xc00083d7c0) Create stream\nI0313 00:05:53.633241 2158 log.go:172] (0xc00093ce70) (0xc00083d7c0) Stream added, broadcasting: 1\nI0313 00:05:53.637322 2158 log.go:172] (0xc00093ce70) Reply frame received for 1\nI0313 00:05:53.637346 2158 log.go:172] (0xc00093ce70) (0xc0005ed680) Create stream\nI0313 00:05:53.637352 2158 log.go:172] (0xc00093ce70) (0xc0005ed680) Stream added, broadcasting: 3\nI0313 00:05:53.638088 2158 log.go:172] (0xc00093ce70) Reply frame received for 3\nI0313 00:05:53.638107 2158 log.go:172] (0xc00093ce70) (0xc00044aaa0) Create stream\nI0313 00:05:53.638135 2158 log.go:172] (0xc00093ce70) (0xc00044aaa0) Stream added, broadcasting: 5\nI0313 00:05:53.638732 2158 log.go:172] (0xc00093ce70) Reply frame received for 5\nI0313 00:05:53.704068 2158 log.go:172] (0xc00093ce70) Data frame received for 3\nI0313 00:05:53.704102 2158 log.go:172] (0xc0005ed680) (3) Data frame handling\nI0313 00:05:53.704123 2158 log.go:172] (0xc00093ce70) Data frame received for 5\nI0313 00:05:53.704133 2158 log.go:172] (0xc00044aaa0) (5) Data frame handling\nI0313 00:05:53.704153 2158 log.go:172] (0xc00044aaa0) (5) Data frame sent\nI0313 00:05:53.704164 2158 log.go:172] (0xc00093ce70) Data frame received for 5\nI0313 00:05:53.704172 2158 log.go:172] (0xc00044aaa0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.18 31960\nConnection to 172.17.0.18 31960 port [tcp/31960] succeeded!\nI0313 00:05:53.705035 2158 log.go:172] (0xc00093ce70) Data frame received for 1\nI0313 00:05:53.705050 2158 log.go:172] (0xc00083d7c0) (1) Data frame handling\nI0313 00:05:53.705063 2158 log.go:172] (0xc00083d7c0) (1) Data frame sent\nI0313 00:05:53.705078 2158 log.go:172] (0xc00093ce70) (0xc00083d7c0) Stream removed, broadcasting: 1\nI0313 00:05:53.705091 2158 log.go:172] (0xc00093ce70) Go away received\nI0313 00:05:53.705397 2158 log.go:172] (0xc00093ce70) (0xc00083d7c0) Stream removed, broadcasting: 1\nI0313 00:05:53.705411 2158 log.go:172] (0xc00093ce70) (0xc0005ed680) Stream removed, broadcasting: 3\nI0313 00:05:53.705417 2158 log.go:172] (0xc00093ce70) (0xc00044aaa0) Stream removed, broadcasting: 5\n" Mar 13 00:05:53.708: INFO: stdout: "" Mar 13 00:05:53.708: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:05:53.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1167" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:8.877 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":275,"completed":120,"skipped":1847,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:05:53.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-fb7b3d36-ded5-4267-a608-b3e5aa830a33 STEP: Creating a pod to test consume secrets Mar 13 00:05:53.822: INFO: Waiting up to 5m0s for pod "pod-secrets-313e58d8-db10-411d-bd62-daa3462c6223" in namespace "secrets-2531" to be "Succeeded or Failed" Mar 13 00:05:53.826: INFO: Pod "pod-secrets-313e58d8-db10-411d-bd62-daa3462c6223": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110542ms Mar 13 00:05:55.831: INFO: Pod "pod-secrets-313e58d8-db10-411d-bd62-daa3462c6223": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008394101s STEP: Saw pod success Mar 13 00:05:55.831: INFO: Pod "pod-secrets-313e58d8-db10-411d-bd62-daa3462c6223" satisfied condition "Succeeded or Failed" Mar 13 00:05:55.833: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-313e58d8-db10-411d-bd62-daa3462c6223 container secret-volume-test: STEP: delete the pod Mar 13 00:05:55.864: INFO: Waiting for pod pod-secrets-313e58d8-db10-411d-bd62-daa3462c6223 to disappear Mar 13 00:05:55.869: INFO: Pod pod-secrets-313e58d8-db10-411d-bd62-daa3462c6223 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:05:55.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2531" for this suite. 
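(For reference: the secret-volume consumption exercised above can be reproduced by hand with a minimal manifest along the following lines. The secret name, pod name, and busybox image are illustrative stand-ins, not the suite's own fixtures — the suite builds its pod programmatically and uses its own test images.)

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
EOF
kubectl logs secret-volume-demo   # once the pod reaches Succeeded, prints: value-1
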
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":121,"skipped":1866,"failed":0} SSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:05:55.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:157 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:05:55.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1273" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":275,"completed":122,"skipped":1873,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:05:55.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 13 00:05:56.039: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:05:57.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6092" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":275,"completed":123,"skipped":1875,"failed":0} SSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:05:57.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 13 00:05:57.333: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:05:57.356: INFO: Number of nodes with available pods: 0 Mar 13 00:05:57.356: INFO: Node latest-worker is running more than one daemon pod Mar 13 00:05:58.360: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:05:58.362: INFO: Number of nodes with available pods: 0 Mar 13 00:05:58.362: INFO: Node latest-worker is running more than one daemon pod Mar 13 00:05:59.375: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:05:59.378: INFO: Number of nodes with available pods: 2 Mar 13 00:05:59.378: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Mar 13 00:05:59.393: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:05:59.395: INFO: Number of nodes with available pods: 1 Mar 13 00:05:59.395: INFO: Node latest-worker2 is running more than one daemon pod Mar 13 00:06:00.398: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:06:00.401: INFO: Number of nodes with available pods: 1 Mar 13 00:06:00.401: INFO: Node latest-worker2 is running more than one daemon pod Mar 13 00:06:01.398: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:06:01.400: INFO: Number of nodes with available pods: 1 Mar 13 00:06:01.400: INFO: Node latest-worker2 is running more than one daemon pod Mar 13 00:06:02.399: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:06:02.401: INFO: Number of nodes with available pods: 1 Mar 13 00:06:02.401: INFO: Node latest-worker2 is running more than one daemon pod Mar 13 00:06:03.404: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:06:03.414: INFO: Number of nodes with available pods: 1 Mar 13 00:06:03.414: INFO: Node latest-worker2 is running more than one daemon pod Mar 13 00:06:04.398: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:06:04.400: INFO: Number of nodes with available pods: 1 Mar 13 00:06:04.400: INFO: Node latest-worker2 is running more than one daemon pod Mar 13 00:06:05.399: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:06:05.402: INFO: Number of nodes with available pods: 1 Mar 13 00:06:05.403: INFO: Node latest-worker2 is running more than one daemon pod Mar 13 00:06:06.405: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:06:06.408: INFO: Number of nodes with available pods: 1 Mar 13 00:06:06.408: INFO: Node latest-worker2 is running more than one daemon pod Mar 13 00:06:07.400: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:06:07.413: INFO: Number of nodes with available pods: 1 Mar 13 00:06:07.413: INFO: Node latest-worker2 is running more than one daemon pod Mar 13 00:06:08.398: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:06:08.401: INFO: Number of nodes with available pods: 1 Mar 13 00:06:08.401: INFO: Node latest-worker2 is running more than one daemon pod Mar 13 00:06:09.398: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:06:09.400: INFO: Number of nodes with available pods: 1 Mar 13 00:06:09.400: INFO: Node latest-worker2 is running more than one daemon pod Mar 13 00:06:10.403: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:06:10.405: INFO: Number of nodes with available pods: 1 Mar 13 00:06:10.405: INFO: Node latest-worker2 is running more than one daemon pod Mar 13 00:06:11.399: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:06:11.402: INFO: Number of nodes with available pods: 1 Mar 13 00:06:11.402: INFO: Node latest-worker2 is running more than one daemon pod Mar 13 00:06:12.398: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:06:12.401: INFO: Number of nodes with available pods: 1 Mar 13 00:06:12.401: INFO: Node latest-worker2 is running more than one daemon pod Mar 13 00:06:13.398: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:06:13.400: INFO: Number of nodes with available pods: 2 Mar 13 00:06:13.400: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9337, will wait for the garbage collector to delete the pods Mar 13 00:06:13.485: INFO: Deleting DaemonSet.extensions daemon-set took: 31.373472ms Mar 13 00:06:13.786: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.184021ms Mar 13 00:06:22.188: INFO: Number of nodes with available pods: 0 Mar 13 00:06:22.188: INFO: Number of running nodes: 0, number of available pods: 0 Mar 13 00:06:22.191: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9337/daemonsets","resourceVersion":"1218019"},"items":null} Mar 13 00:06:22.193: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9337/pods","resourceVersion":"1218019"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:06:22.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9337" for this suite. 
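(For reference: the run/stop cycle above — launch on every schedulable node, delete one daemon pod, wait for the controller to revive it — can be exercised by hand roughly as follows. The namespace and pause image are illustrative; note that, as the log shows, daemon pods only land on nodes whose taints the pod template tolerates, so the tainted control-plane node is skipped.)

kubectl create namespace ds-demo
kubectl apply -n ds-demo -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.2
EOF
kubectl -n ds-demo rollout status daemonset/daemon-set
# delete one daemon pod; the controller schedules a replacement on the same node
kubectl -n ds-demo delete pod "$(kubectl -n ds-demo get pods -l app=daemon-set -o jsonpath='{.items[0].metadata.name}')"
kubectl -n ds-demo get pods -l app=daemon-set -w
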
• [SLOW TEST:24.991 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":124,"skipped":1881,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:06:22.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Mar 13 00:06:24.812: INFO: Successfully updated pod "labelsupdatea577fb8b-6de4-4e17-a78b-09ebce7b6c86" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:06:26.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2360" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":125,"skipped":1901,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:06:26.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 13 00:06:26.905: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:06:31.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1654" for this suite. 
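(For reference: the websocket test above drives the pod's exec subresource directly; the everyday equivalent is kubectl exec, which negotiates the same streaming subresource on your behalf. Pod name and image below are illustrative.)

kubectl run ws-demo --image=busybox --restart=Never -- sh -c 'sleep 3600'
kubectl wait --for=condition=Ready pod/ws-demo
kubectl exec ws-demo -- echo remote-exec-ok
# kubectl exec targets the same endpoint the test exercises, e.g.:
#   POST /api/v1/namespaces/default/pods/ws-demo/exec?command=echo&command=remote-exec-ok&stdout=true
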
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":275,"completed":126,"skipped":1960,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:06:31.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 13 00:06:31.105: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6e4b883e-2e6e-4738-9014-8843fca08262" in namespace "projected-6097" to be "Succeeded or Failed" Mar 13 00:06:31.110: INFO: Pod "downwardapi-volume-6e4b883e-2e6e-4738-9014-8843fca08262": Phase="Pending", Reason="", readiness=false. Elapsed: 4.352976ms Mar 13 00:06:33.113: INFO: Pod "downwardapi-volume-6e4b883e-2e6e-4738-9014-8843fca08262": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008075988s Mar 13 00:06:35.124: INFO: Pod "downwardapi-volume-6e4b883e-2e6e-4738-9014-8843fca08262": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019040488s STEP: Saw pod success Mar 13 00:06:35.124: INFO: Pod "downwardapi-volume-6e4b883e-2e6e-4738-9014-8843fca08262" satisfied condition "Succeeded or Failed" Mar 13 00:06:35.127: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-6e4b883e-2e6e-4738-9014-8843fca08262 container client-container: STEP: delete the pod Mar 13 00:06:35.175: INFO: Waiting for pod downwardapi-volume-6e4b883e-2e6e-4738-9014-8843fca08262 to disappear Mar 13 00:06:35.181: INFO: Pod downwardapi-volume-6e4b883e-2e6e-4738-9014-8843fca08262 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:06:35.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6097" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":127,"skipped":1962,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:06:35.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 13 00:06:37.832: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:06:37.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8804" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":275,"completed":128,"skipped":2005,"failed":0} SS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:06:37.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2290 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2290;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2290 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2290;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2290.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2290.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2290.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2290.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2290.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2290.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2290.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2290.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2290.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2290.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2290.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2290.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2290.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 155.185.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.185.155_udp@PTR;check="$$(dig +tcp +noall +answer +search 155.185.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.185.155_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2290 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2290;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2290 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2290;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2290.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2290.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2290.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2290.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2290.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2290.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2290.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2290.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2290.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2290.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2290.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2290.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2290.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 155.185.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.185.155_udp@PTR;check="$$(dig +tcp +noall +answer +search 155.185.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.185.155_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 13 00:06:42.048: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:42.051: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:42.053: INFO: Unable to read wheezy_udp@dns-test-service.dns-2290 from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:42.057: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2290 from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:42.061: INFO: Unable to read wheezy_udp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:42.063: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:42.067: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:42.069: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:42.088: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:42.091: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:42.093: INFO: Unable to read jessie_udp@dns-test-service.dns-2290 from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:42.096: INFO: Unable to read jessie_tcp@dns-test-service.dns-2290 from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:42.098: INFO: Unable to read jessie_udp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:42.101: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:42.103: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:42.106: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:42.120: INFO: Lookups using dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2290 wheezy_tcp@dns-test-service.dns-2290 wheezy_udp@dns-test-service.dns-2290.svc wheezy_tcp@dns-test-service.dns-2290.svc wheezy_udp@_http._tcp.dns-test-service.dns-2290.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2290.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2290 jessie_tcp@dns-test-service.dns-2290 jessie_udp@dns-test-service.dns-2290.svc jessie_tcp@dns-test-service.dns-2290.svc jessie_udp@_http._tcp.dns-test-service.dns-2290.svc jessie_tcp@_http._tcp.dns-test-service.dns-2290.svc] Mar 13 00:06:47.125: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:47.129: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:47.134: INFO: Unable to read wheezy_udp@dns-test-service.dns-2290 from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:47.137: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2290 from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:47.140: INFO: Unable to read wheezy_udp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:47.143: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:47.146: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:47.148: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:47.168: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:47.172: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:47.175: INFO: Unable to read jessie_udp@dns-test-service.dns-2290 from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:47.178: INFO: Unable to read jessie_tcp@dns-test-service.dns-2290 from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:47.181: INFO: Unable to read jessie_udp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:47.184: INFO: Unable to read jessie_tcp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:47.188: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:47.190: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:47.206: INFO: Lookups using dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2290 wheezy_tcp@dns-test-service.dns-2290 wheezy_udp@dns-test-service.dns-2290.svc wheezy_tcp@dns-test-service.dns-2290.svc wheezy_udp@_http._tcp.dns-test-service.dns-2290.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2290.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2290 jessie_tcp@dns-test-service.dns-2290 jessie_udp@dns-test-service.dns-2290.svc jessie_tcp@dns-test-service.dns-2290.svc jessie_udp@_http._tcp.dns-test-service.dns-2290.svc jessie_tcp@_http._tcp.dns-test-service.dns-2290.svc] Mar 13 00:06:52.131: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:52.134: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:52.138: INFO: Unable to read wheezy_udp@dns-test-service.dns-2290 from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:52.141: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2290 from pod 
dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:52.143: INFO: Unable to read wheezy_udp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:52.146: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:52.149: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:52.151: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:52.170: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:52.172: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:52.174: INFO: Unable to read jessie_udp@dns-test-service.dns-2290 from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:52.177: INFO: Unable to read jessie_tcp@dns-test-service.dns-2290 from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:52.179: INFO: Unable to read jessie_udp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:52.182: INFO: Unable to read jessie_tcp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:52.184: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:52.187: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:52.205: INFO: Lookups using dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2290 wheezy_tcp@dns-test-service.dns-2290 wheezy_udp@dns-test-service.dns-2290.svc wheezy_tcp@dns-test-service.dns-2290.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-2290.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2290.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2290 jessie_tcp@dns-test-service.dns-2290 jessie_udp@dns-test-service.dns-2290.svc jessie_tcp@dns-test-service.dns-2290.svc jessie_udp@_http._tcp.dns-test-service.dns-2290.svc jessie_tcp@_http._tcp.dns-test-service.dns-2290.svc] Mar 13 00:06:57.124: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:57.129: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:57.132: INFO: Unable to read wheezy_udp@dns-test-service.dns-2290 from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:57.136: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2290 from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:57.139: INFO: Unable to read wheezy_udp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:57.141: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:57.145: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:57.148: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:57.168: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:57.171: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:57.174: INFO: Unable to read jessie_udp@dns-test-service.dns-2290 from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:57.176: INFO: Unable to read jessie_tcp@dns-test-service.dns-2290 from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:57.179: INFO: Unable to read jessie_udp@dns-test-service.dns-2290.svc from pod 
dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:57.181: INFO: Unable to read jessie_tcp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:57.185: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:57.187: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:06:57.207: INFO: Lookups using dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2290 wheezy_tcp@dns-test-service.dns-2290 wheezy_udp@dns-test-service.dns-2290.svc wheezy_tcp@dns-test-service.dns-2290.svc wheezy_udp@_http._tcp.dns-test-service.dns-2290.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2290.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2290 jessie_tcp@dns-test-service.dns-2290 jessie_udp@dns-test-service.dns-2290.svc jessie_tcp@dns-test-service.dns-2290.svc jessie_udp@_http._tcp.dns-test-service.dns-2290.svc jessie_tcp@_http._tcp.dns-test-service.dns-2290.svc] Mar 13 00:07:02.124: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:07:02.127: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:07:02.130: INFO: Unable to read wheezy_udp@dns-test-service.dns-2290 from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:07:02.133: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2290 from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:07:02.135: INFO: Unable to read wheezy_udp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:07:02.137: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:07:02.139: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:07:02.141: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2290.svc from pod 
dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:07:02.158: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:07:02.160: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:07:02.162: INFO: Unable to read jessie_udp@dns-test-service.dns-2290 from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:07:02.164: INFO: Unable to read jessie_tcp@dns-test-service.dns-2290 from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:07:02.167: INFO: Unable to read jessie_udp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:07:02.169: INFO: Unable to read jessie_tcp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:07:02.171: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:07:02.173: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:07:02.187: INFO: Lookups using dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2290 wheezy_tcp@dns-test-service.dns-2290 wheezy_udp@dns-test-service.dns-2290.svc wheezy_tcp@dns-test-service.dns-2290.svc wheezy_udp@_http._tcp.dns-test-service.dns-2290.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2290.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2290 jessie_tcp@dns-test-service.dns-2290 jessie_udp@dns-test-service.dns-2290.svc jessie_tcp@dns-test-service.dns-2290.svc jessie_udp@_http._tcp.dns-test-service.dns-2290.svc jessie_tcp@_http._tcp.dns-test-service.dns-2290.svc] Mar 13 00:07:07.136: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:07:07.140: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:07:07.142: INFO: Unable to read wheezy_udp@dns-test-service.dns-2290 from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the 
server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:07:07.145: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2290 from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:07:07.148: INFO: Unable to read wheezy_udp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:07:07.150: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:07:07.152: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:07:07.155: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:07:07.175: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:07:07.177: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:07:07.180: INFO: Unable to read jessie_udp@dns-test-service.dns-2290 from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:07:07.183: INFO: Unable to read jessie_tcp@dns-test-service.dns-2290 from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:07:07.185: INFO: Unable to read jessie_udp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:07:07.188: INFO: Unable to read jessie_tcp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:07:07.190: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:07:07.193: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6: the server could not find the requested resource (get pods dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6) Mar 13 00:07:07.206: INFO: Lookups using dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6 failed for: [wheezy_udp@dns-test-service 
wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2290 wheezy_tcp@dns-test-service.dns-2290 wheezy_udp@dns-test-service.dns-2290.svc wheezy_tcp@dns-test-service.dns-2290.svc wheezy_udp@_http._tcp.dns-test-service.dns-2290.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2290.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2290 jessie_tcp@dns-test-service.dns-2290 jessie_udp@dns-test-service.dns-2290.svc jessie_tcp@dns-test-service.dns-2290.svc jessie_udp@_http._tcp.dns-test-service.dns-2290.svc jessie_tcp@_http._tcp.dns-test-service.dns-2290.svc] Mar 13 00:07:12.196: INFO: DNS probes using dns-2290/dns-test-eeb0aa8b-5a39-44f9-82fe-1bfbaee8f5a6 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:07:12.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2290" for this suite. • [SLOW TEST:34.561 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":275,"completed":129,"skipped":2007,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:07:12.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-1a8aae75-e941-4316-b241-88a2cc7bdccf STEP: Creating a pod to test consume configMaps Mar 13 00:07:12.500: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c70357e4-ec3c-43e1-971a-6882a2718fb2" in namespace "projected-7117" to be "Succeeded or Failed" Mar 13 00:07:12.509: INFO: Pod "pod-projected-configmaps-c70357e4-ec3c-43e1-971a-6882a2718fb2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.882084ms Mar 13 00:07:14.512: INFO: Pod "pod-projected-configmaps-c70357e4-ec3c-43e1-971a-6882a2718fb2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.011720607s STEP: Saw pod success Mar 13 00:07:14.512: INFO: Pod "pod-projected-configmaps-c70357e4-ec3c-43e1-971a-6882a2718fb2" satisfied condition "Succeeded or Failed" Mar 13 00:07:14.513: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-c70357e4-ec3c-43e1-971a-6882a2718fb2 container projected-configmap-volume-test: STEP: delete the pod Mar 13 00:07:14.528: INFO: Waiting for pod pod-projected-configmaps-c70357e4-ec3c-43e1-971a-6882a2718fb2 to disappear Mar 13 00:07:14.532: INFO: Pod pod-projected-configmaps-c70357e4-ec3c-43e1-971a-6882a2718fb2 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:07:14.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7117" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":130,"skipped":2012,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:07:14.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:07:19.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8952" for this suite. 
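The ordering guarantee this Watchers spec asserts can be sketched with client-go: two watches opened at the same resourceVersion must observe events in the same order. A minimal, hypothetical sketch (modern client-go signatures; the namespace, resource, and resourceVersion are illustrative, not the test's own):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as used by the suite; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Two independent watches started from the same resourceVersion.
	// The spec above asserts both streams deliver events in identical order.
	opts := metav1.ListOptions{ResourceVersion: "0"} // illustrative RV
	w1, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(), opts)
	if err != nil {
		panic(err)
	}
	defer w1.Stop()
	w2, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(), opts)
	if err != nil {
		panic(err)
	}
	defer w2.Stop()
	e1, e2 := <-w1.ResultChan(), <-w2.ResultChan()
	fmt.Println(e1.Type, e2.Type) // the first event should agree across watchers
}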
• [SLOW TEST:5.505 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":131,"skipped":2018,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:07:20.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name projected-secret-test-2c5ceb93-3301-4213-9ed9-48a2850094e0 STEP: Creating a pod to test consume secrets Mar 13 00:07:20.130: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-91d6257a-a94c-47e1-8274-e8b8f230448d" in namespace "projected-748" to be "Succeeded or Failed" Mar 13 00:07:20.147: INFO: Pod "pod-projected-secrets-91d6257a-a94c-47e1-8274-e8b8f230448d": Phase="Pending", Reason="", readiness=false. Elapsed: 16.974818ms Mar 13 00:07:22.151: INFO: Pod "pod-projected-secrets-91d6257a-a94c-47e1-8274-e8b8f230448d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020887958s Mar 13 00:07:24.154: INFO: Pod "pod-projected-secrets-91d6257a-a94c-47e1-8274-e8b8f230448d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024755366s STEP: Saw pod success Mar 13 00:07:24.154: INFO: Pod "pod-projected-secrets-91d6257a-a94c-47e1-8274-e8b8f230448d" satisfied condition "Succeeded or Failed" Mar 13 00:07:24.157: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-91d6257a-a94c-47e1-8274-e8b8f230448d container secret-volume-test: STEP: delete the pod Mar 13 00:07:24.172: INFO: Waiting for pod pod-projected-secrets-91d6257a-a94c-47e1-8274-e8b8f230448d to disappear Mar 13 00:07:24.176: INFO: Pod pod-projected-secrets-91d6257a-a94c-47e1-8274-e8b8f230448d no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:07:24.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-748" for this suite. 
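A hedged sketch of the pod shape the Projected secret spec just built: one secret projected into two separate volumes, each mounted at its own path. All names and the busybox image are illustrative assumptions, not the suite's own:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Helper: a projected volume sourcing the same (hypothetical) secret.
	secretVol := func(name string) v1.Volume {
		return v1.Volume{
			Name: name,
			VolumeSource: v1.VolumeSource{
				Projected: &v1.ProjectedVolumeSource{
					Sources: []v1.VolumeProjection{{
						Secret: &v1.SecretProjection{
							LocalObjectReference: v1.LocalObjectReference{Name: "projected-secret-test"},
						},
					}},
				},
			},
		}
	}
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			// The same secret consumed twice, via two distinct volumes.
			Volumes: []v1.Volume{secretVol("secret-volume-1"), secretVol("secret-volume-2")},
			Containers: []v1.Container{{
				Name:  "secret-volume-test",
				Image: "busybox",
				Args:  []string{"sh", "-c", "ls /etc/secret-volume-1 /etc/secret-volume-2"},
				VolumeMounts: []v1.VolumeMount{
					{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
					{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}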
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":132,"skipped":2035,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:07:24.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Mar 13 00:07:24.244: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Mar 13 00:07:35.382: INFO: >>> kubeConfig: /root/.kube/config Mar 13 00:07:37.379: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:07:48.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2559" for this suite. 
• [SLOW TEST:24.229 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":275,"completed":133,"skipped":2057,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:07:48.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 13 00:07:48.489: INFO: Waiting up to 5m0s for pod "pod-99b1ed4e-554e-42ee-8c02-e3b14c6ded7f" in namespace "emptydir-75" to be "Succeeded or Failed" Mar 13 00:07:48.508: INFO: Pod "pod-99b1ed4e-554e-42ee-8c02-e3b14c6ded7f": Phase="Pending", Reason="", readiness=false. Elapsed: 18.80717ms Mar 13 00:07:50.512: INFO: Pod "pod-99b1ed4e-554e-42ee-8c02-e3b14c6ded7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.022477908s STEP: Saw pod success Mar 13 00:07:50.512: INFO: Pod "pod-99b1ed4e-554e-42ee-8c02-e3b14c6ded7f" satisfied condition "Succeeded or Failed" Mar 13 00:07:50.514: INFO: Trying to get logs from node latest-worker pod pod-99b1ed4e-554e-42ee-8c02-e3b14c6ded7f container test-container: STEP: delete the pod Mar 13 00:07:50.568: INFO: Waiting for pod pod-99b1ed4e-554e-42ee-8c02-e3b14c6ded7f to disappear Mar 13 00:07:50.584: INFO: Pod pod-99b1ed4e-554e-42ee-8c02-e3b14c6ded7f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:07:50.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-75" for this suite. 
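A minimal sketch of the EmptyDir case that just passed: a default-medium emptyDir volume used by a non-root container that creates a 0644 file and reads it back. The UID, image, and shell commands are illustrative assumptions, not the suite's own test image:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRoot := int64(1001) // hypothetical non-root UID
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-0644"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Volumes: []v1.Volume{{
				Name: "test-volume",
				// Empty EmptyDirVolumeSource means the default (node disk) medium.
				VolumeSource: v1.VolumeSource{EmptyDir: &v1.EmptyDirVolumeSource{}},
			}},
			Containers: []v1.Container{{
				Name:            "test-container",
				Image:           "busybox",
				SecurityContext: &v1.SecurityContext{RunAsUser: &nonRoot},
				Args: []string{"sh", "-c",
					"touch /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"},
				VolumeMounts: []v1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}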
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":134,"skipped":2110,"failed":0} SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:07:50.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-5264 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a new StatefulSet Mar 13 00:07:50.667: INFO: Found 0 stateful pods, waiting for 3 Mar 13 00:08:00.672: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 13 00:08:00.672: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 13 00:08:00.672: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Mar 13 00:08:00.698: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Mar 13 00:08:10.759: INFO: Updating stateful set ss2 Mar 13 00:08:10.798: INFO: Waiting for Pod statefulset-5264/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 13 00:08:20.806: INFO: Waiting for Pod statefulset-5264/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Mar 13 00:08:30.925: INFO: Found 2 stateful pods, waiting for 3 Mar 13 00:08:40.929: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 13 00:08:40.929: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 13 00:08:40.929: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Mar 13 00:08:40.950: INFO: Updating stateful set ss2 Mar 13 00:08:40.983: INFO: Waiting for Pod statefulset-5264/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 13 00:08:51.008: INFO: Updating stateful set ss2 Mar 13 00:08:51.036: INFO: Waiting for StatefulSet statefulset-5264/ss2 to complete update Mar 13 00:08:51.036: INFO: Waiting for Pod statefulset-5264/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 13 00:09:01.043: INFO: Deleting all statefulset in ns statefulset-5264 Mar 13 00:09:01.045: INFO: Scaling statefulset ss2 to 0 Mar 13 00:09:11.077: INFO: Waiting for statefulset status.replicas updated to 0 Mar 13 00:09:11.080: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:09:11.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5264" for this suite. • [SLOW TEST:80.512 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":275,"completed":135,"skipped":2113,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:09:11.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 13 00:09:11.194: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 13 00:09:13.990: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3550 create -f -' Mar 13 00:09:16.922: INFO: stderr: "" Mar 13 00:09:16.922: INFO: stdout: "e2e-test-crd-publish-openapi-3702-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 13 00:09:16.922: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3550 delete e2e-test-crd-publish-openapi-3702-crds test-cr' Mar 13 00:09:17.007: INFO: stderr: "" Mar 13 00:09:17.007: INFO: stdout: "e2e-test-crd-publish-openapi-3702-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Mar 13 00:09:17.007: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3550 apply -f -' Mar 13 00:09:17.316: INFO: stderr: "" Mar 13 00:09:17.316: INFO: stdout: 
"e2e-test-crd-publish-openapi-3702-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 13 00:09:17.316: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3550 delete e2e-test-crd-publish-openapi-3702-crds test-cr' Mar 13 00:09:17.426: INFO: stderr: "" Mar 13 00:09:17.426: INFO: stdout: "e2e-test-crd-publish-openapi-3702-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 13 00:09:17.426: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3702-crds' Mar 13 00:09:17.646: INFO: stderr: "" Mar 13 00:09:17.646: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3702-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:09:20.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3550" for this suite. 
• [SLOW TEST:9.405 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":275,"completed":136,"skipped":2151,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:09:20.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: getting the auto-created API token Mar 13 00:09:21.097: INFO: created pod pod-service-account-defaultsa Mar 13 00:09:21.097: INFO: pod pod-service-account-defaultsa service account token volume mount: true Mar 13 00:09:21.101: INFO: created pod pod-service-account-mountsa Mar 13 00:09:21.101: INFO: pod pod-service-account-mountsa service account token volume mount: true Mar 13 00:09:21.158: INFO: created pod pod-service-account-nomountsa Mar 13 00:09:21.158: INFO: pod pod-service-account-nomountsa service account token volume mount: false Mar 13 00:09:21.177: INFO: created pod pod-service-account-defaultsa-mountspec Mar 13 00:09:21.177: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Mar 13 00:09:21.209: INFO: created pod pod-service-account-mountsa-mountspec Mar 13 00:09:21.209: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Mar 13 00:09:21.247: INFO: created pod pod-service-account-nomountsa-mountspec Mar 13 00:09:21.247: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Mar 13 00:09:21.295: INFO: created pod pod-service-account-defaultsa-nomountspec Mar 13 00:09:21.295: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Mar 13 00:09:21.299: INFO: created pod pod-service-account-mountsa-nomountspec Mar 13 00:09:21.299: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Mar 13 00:09:21.310: INFO: created pod pod-service-account-nomountsa-nomountspec Mar 13 00:09:21.310: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:09:21.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9644" for this suite. 
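A sketch of the opt-out this ServiceAccounts spec verifies: the pod-level automountServiceAccountToken field overrides the ServiceAccount's setting, which is why the test crosses pod specs against differently configured service accounts. Names are illustrative:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	optOut := false
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-no-token"},
		Spec: v1.PodSpec{
			ServiceAccountName: "default",
			// Pod-level field wins over the ServiceAccount's own
			// automountServiceAccountToken; false means no token volume.
			AutomountServiceAccountToken: &optOut,
			Containers: []v1.Container{{
				Name: "token-test", Image: "busybox", Command: []string{"true"},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}

Only when both the pod field and the ServiceAccount field are unset does the token default to being mounted.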
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":275,"completed":137,"skipped":2181,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:09:21.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Starting the proxy Mar 13 00:09:21.675: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix504831955/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:09:21.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7408" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":275,"completed":138,"skipped":2220,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:09:21.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 13 00:09:21.916: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9621a44d-265e-438d-81d8-a91d4d6f24dc" in namespace "downward-api-1829" to be "Succeeded or Failed" Mar 13 00:09:21.932: INFO: Pod "downwardapi-volume-9621a44d-265e-438d-81d8-a91d4d6f24dc": Phase="Pending", Reason="", readiness=false. Elapsed: 16.347145ms Mar 13 00:09:23.935: INFO: Pod "downwardapi-volume-9621a44d-265e-438d-81d8-a91d4d6f24dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019461135s Mar 13 00:09:25.938: INFO: Pod "downwardapi-volume-9621a44d-265e-438d-81d8-a91d4d6f24dc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.021944556s STEP: Saw pod success Mar 13 00:09:25.938: INFO: Pod "downwardapi-volume-9621a44d-265e-438d-81d8-a91d4d6f24dc" satisfied condition "Succeeded or Failed" Mar 13 00:09:25.939: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-9621a44d-265e-438d-81d8-a91d4d6f24dc container client-container: STEP: delete the pod Mar 13 00:09:25.974: INFO: Waiting for pod downwardapi-volume-9621a44d-265e-438d-81d8-a91d4d6f24dc to disappear Mar 13 00:09:25.987: INFO: Pod downwardapi-volume-9621a44d-265e-438d-81d8-a91d4d6f24dc no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:09:25.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1829" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":139,"skipped":2231,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:09:26.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-47beb442-5f7b-4e77-874f-a32635da985f STEP: Creating a pod to test consume secrets Mar 13 00:09:26.102: INFO: Waiting up to 5m0s for pod "pod-secrets-25856596-1e97-4f6b-9f38-c22808dc1f29" in namespace "secrets-5992" to be "Succeeded or Failed" Mar 13 00:09:26.129: INFO: Pod "pod-secrets-25856596-1e97-4f6b-9f38-c22808dc1f29": Phase="Pending", Reason="", readiness=false. Elapsed: 26.454573ms Mar 13 00:09:28.132: INFO: Pod "pod-secrets-25856596-1e97-4f6b-9f38-c22808dc1f29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.029631014s STEP: Saw pod success Mar 13 00:09:28.132: INFO: Pod "pod-secrets-25856596-1e97-4f6b-9f38-c22808dc1f29" satisfied condition "Succeeded or Failed" Mar 13 00:09:28.134: INFO: Trying to get logs from node latest-worker pod pod-secrets-25856596-1e97-4f6b-9f38-c22808dc1f29 container secret-volume-test: STEP: delete the pod Mar 13 00:09:28.151: INFO: Waiting for pod pod-secrets-25856596-1e97-4f6b-9f38-c22808dc1f29 to disappear Mar 13 00:09:28.155: INFO: Pod pod-secrets-25856596-1e97-4f6b-9f38-c22808dc1f29 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:09:28.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5992" for this suite. 
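A hedged sketch of the Secrets case above: a secret volume with a non-default defaultMode, consumed by a non-root user, with fsGroup applied so the files are group-accessible. The uid/gid, mode, and names are illustrative assumptions:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid, gid := int64(1000), int64(1001) // hypothetical non-root uid / fsGroup
	mode := int32(0440)                  // hypothetical defaultMode
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-nonroot"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			// fsGroup makes the mounted secret files group-readable by gid 1001.
			SecurityContext: &v1.PodSecurityContext{RunAsUser: &uid, FSGroup: &gid},
			Volumes: []v1.Volume{{
				Name: "secret-volume",
				VolumeSource: v1.VolumeSource{Secret: &v1.SecretVolumeSource{
					SecretName:  "secret-test", // hypothetical secret name
					DefaultMode: &mode,
				}},
			}},
			Containers: []v1.Container{{
				Name:         "secret-volume-test",
				Image:        "busybox",
				Args:         []string{"sh", "-c", "ls -ln /etc/secret-volume"},
				VolumeMounts: []v1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}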
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":140,"skipped":2252,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:09:28.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Mar 13 00:09:28.253: INFO: Waiting up to 5m0s for pod "downward-api-bdb9da7e-dca7-4641-a6a3-b339df81e30e" in namespace "downward-api-7708" to be "Succeeded or Failed" Mar 13 00:09:28.272: INFO: Pod "downward-api-bdb9da7e-dca7-4641-a6a3-b339df81e30e": Phase="Pending", Reason="", readiness=false. Elapsed: 18.629564ms Mar 13 00:09:30.275: INFO: Pod "downward-api-bdb9da7e-dca7-4641-a6a3-b339df81e30e": Phase="Running", Reason="", readiness=true. Elapsed: 2.022114745s Mar 13 00:09:32.279: INFO: Pod "downward-api-bdb9da7e-dca7-4641-a6a3-b339df81e30e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026035956s STEP: Saw pod success Mar 13 00:09:32.279: INFO: Pod "downward-api-bdb9da7e-dca7-4641-a6a3-b339df81e30e" satisfied condition "Succeeded or Failed" Mar 13 00:09:32.282: INFO: Trying to get logs from node latest-worker pod downward-api-bdb9da7e-dca7-4641-a6a3-b339df81e30e container dapi-container: STEP: delete the pod Mar 13 00:09:32.299: INFO: Waiting for pod downward-api-bdb9da7e-dca7-4641-a6a3-b339df81e30e to disappear Mar 13 00:09:32.303: INFO: Pod downward-api-bdb9da7e-dca7-4641-a6a3-b339df81e30e no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:09:32.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7708" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":275,"completed":141,"skipped":2323,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:09:32.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 13 00:09:32.377: INFO: Waiting up to 5m0s for pod "downwardapi-volume-63e87f29-b3a6-48ec-87bc-fae6057cdc72" in namespace "projected-542" to be "Succeeded or Failed" Mar 13 00:09:32.398: INFO: Pod "downwardapi-volume-63e87f29-b3a6-48ec-87bc-fae6057cdc72": Phase="Pending", Reason="", readiness=false. Elapsed: 20.981407ms Mar 13 00:09:34.402: INFO: Pod "downwardapi-volume-63e87f29-b3a6-48ec-87bc-fae6057cdc72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.024716845s STEP: Saw pod success Mar 13 00:09:34.402: INFO: Pod "downwardapi-volume-63e87f29-b3a6-48ec-87bc-fae6057cdc72" satisfied condition "Succeeded or Failed" Mar 13 00:09:34.405: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-63e87f29-b3a6-48ec-87bc-fae6057cdc72 container client-container: STEP: delete the pod Mar 13 00:09:34.419: INFO: Waiting for pod downwardapi-volume-63e87f29-b3a6-48ec-87bc-fae6057cdc72 to disappear Mar 13 00:09:34.423: INFO: Pod downwardapi-volume-63e87f29-b3a6-48ec-87bc-fae6057cdc72 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:09:34.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-542" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":142,"skipped":2349,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:09:34.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 13 00:09:34.812: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Mar 13 00:09:36.819: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719654974, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719654974, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719654974, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719654974, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 13 00:09:39.852: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:09:39.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8831" for this suite. STEP: Destroying namespace "webhook-8831-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.604 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":275,"completed":143,"skipped":2367,"failed":0} SS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:09:40.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-zvm4b in namespace proxy-882 I0313 00:09:40.193501 7 runners.go:190] Created replication controller with name: proxy-service-zvm4b, namespace: proxy-882, replica count: 1 I0313 00:09:41.243884 7 runners.go:190] proxy-service-zvm4b Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0313 00:09:42.244082 7 runners.go:190] proxy-service-zvm4b Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0313 00:09:43.244308 7 runners.go:190] proxy-service-zvm4b Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0313 00:09:44.244524 7 runners.go:190] proxy-service-zvm4b Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0313 00:09:45.244705 7 runners.go:190] proxy-service-zvm4b Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0313 00:09:46.245000 7 runners.go:190] proxy-service-zvm4b Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 13 00:09:46.247: INFO: setup took 6.095710116s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Mar 13 00:09:46.252: INFO: (0) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7/proxy/: test (200; 4.616339ms) Mar 13 00:09:46.256: INFO: (0) /api/v1/namespaces/proxy-882/services/http:proxy-service-zvm4b:portname2/proxy/: bar (200; 8.683073ms) Mar 13 00:09:46.256: INFO: (0) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7:162/proxy/: bar (200; 8.590899ms) Mar 13 00:09:46.256: INFO: (0) /api/v1/namespaces/proxy-882/pods/http:proxy-service-zvm4b-dxmr7:160/proxy/: foo (200; 8.760657ms) Mar 13 00:09:46.257: INFO: (0) 
/api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7:160/proxy/: foo (200; 9.787236ms) Mar 13 00:09:46.258: INFO: (0) /api/v1/namespaces/proxy-882/pods/http:proxy-service-zvm4b-dxmr7:162/proxy/: bar (200; 10.017681ms) Mar 13 00:09:46.258: INFO: (0) /api/v1/namespaces/proxy-882/services/proxy-service-zvm4b:portname2/proxy/: bar (200; 10.567211ms) Mar 13 00:09:46.258: INFO: (0) /api/v1/namespaces/proxy-882/services/http:proxy-service-zvm4b:portname1/proxy/: foo (200; 10.582105ms) Mar 13 00:09:46.264: INFO: (0) /api/v1/namespaces/proxy-882/pods/http:proxy-service-zvm4b-dxmr7:1080/proxy/: t... (200; 15.944449ms) Mar 13 00:09:46.264: INFO: (0) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7:1080/proxy/: testt... (200; 7.085547ms) Mar 13 00:09:46.280: INFO: (1) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7:162/proxy/: bar (200; 7.234336ms) Mar 13 00:09:46.280: INFO: (1) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7/proxy/: test (200; 7.243765ms) Mar 13 00:09:46.281: INFO: (1) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7:160/proxy/: foo (200; 7.272436ms) Mar 13 00:09:46.281: INFO: (1) /api/v1/namespaces/proxy-882/pods/https:proxy-service-zvm4b-dxmr7:460/proxy/: tls baz (200; 7.350233ms) Mar 13 00:09:46.281: INFO: (1) /api/v1/namespaces/proxy-882/pods/http:proxy-service-zvm4b-dxmr7:160/proxy/: foo (200; 7.539972ms) Mar 13 00:09:46.281: INFO: (1) /api/v1/namespaces/proxy-882/pods/http:proxy-service-zvm4b-dxmr7:162/proxy/: bar (200; 7.752957ms) Mar 13 00:09:46.281: INFO: (1) /api/v1/namespaces/proxy-882/services/http:proxy-service-zvm4b:portname1/proxy/: foo (200; 7.767026ms) Mar 13 00:09:46.281: INFO: (1) /api/v1/namespaces/proxy-882/pods/https:proxy-service-zvm4b-dxmr7:462/proxy/: tls qux (200; 7.703135ms) Mar 13 00:09:46.281: INFO: (1) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7:1080/proxy/: testtest (200; 7.256386ms) Mar 13 00:09:46.291: INFO: (2) /api/v1/namespaces/proxy-882/pods/https:proxy-service-zvm4b-dxmr7:462/proxy/: tls qux (200; 7.223712ms) Mar 13 00:09:46.291: INFO: (2) /api/v1/namespaces/proxy-882/services/http:proxy-service-zvm4b:portname2/proxy/: bar (200; 7.260877ms) Mar 13 00:09:46.291: INFO: (2) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7:162/proxy/: bar (200; 7.228514ms) Mar 13 00:09:46.291: INFO: (2) /api/v1/namespaces/proxy-882/pods/http:proxy-service-zvm4b-dxmr7:1080/proxy/: t... 
(200; 7.295656ms) Mar 13 00:09:46.291: INFO: (2) /api/v1/namespaces/proxy-882/pods/http:proxy-service-zvm4b-dxmr7:160/proxy/: foo (200; 7.290445ms) Mar 13 00:09:46.291: INFO: (2) /api/v1/namespaces/proxy-882/services/http:proxy-service-zvm4b:portname1/proxy/: foo (200; 7.238264ms) Mar 13 00:09:46.291: INFO: (2) /api/v1/namespaces/proxy-882/services/https:proxy-service-zvm4b:tlsportname2/proxy/: tls qux (200; 7.249642ms) Mar 13 00:09:46.291: INFO: (2) /api/v1/namespaces/proxy-882/services/https:proxy-service-zvm4b:tlsportname1/proxy/: tls baz (200; 7.381439ms) Mar 13 00:09:46.291: INFO: (2) /api/v1/namespaces/proxy-882/pods/https:proxy-service-zvm4b-dxmr7:460/proxy/: tls baz (200; 7.56365ms) Mar 13 00:09:46.291: INFO: (2) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7:1080/proxy/: testtest (200; 3.747888ms) Mar 13 00:09:46.295: INFO: (3) /api/v1/namespaces/proxy-882/pods/http:proxy-service-zvm4b-dxmr7:160/proxy/: foo (200; 3.999751ms) Mar 13 00:09:46.297: INFO: (3) /api/v1/namespaces/proxy-882/pods/http:proxy-service-zvm4b-dxmr7:162/proxy/: bar (200; 5.422153ms) Mar 13 00:09:46.297: INFO: (3) /api/v1/namespaces/proxy-882/services/https:proxy-service-zvm4b:tlsportname1/proxy/: tls baz (200; 6.029531ms) Mar 13 00:09:46.297: INFO: (3) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7:1080/proxy/: testt... (200; 6.677135ms) Mar 13 00:09:46.298: INFO: (3) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7:162/proxy/: bar (200; 6.746412ms) Mar 13 00:09:46.298: INFO: (3) /api/v1/namespaces/proxy-882/services/https:proxy-service-zvm4b:tlsportname2/proxy/: tls qux (200; 6.864396ms) Mar 13 00:09:46.298: INFO: (3) /api/v1/namespaces/proxy-882/services/proxy-service-zvm4b:portname1/proxy/: foo (200; 6.972141ms) Mar 13 00:09:46.298: INFO: (3) /api/v1/namespaces/proxy-882/services/http:proxy-service-zvm4b:portname2/proxy/: bar (200; 7.054982ms) Mar 13 00:09:46.298: INFO: (3) /api/v1/namespaces/proxy-882/pods/https:proxy-service-zvm4b-dxmr7:462/proxy/: tls qux (200; 7.141256ms) Mar 13 00:09:46.298: INFO: (3) /api/v1/namespaces/proxy-882/pods/https:proxy-service-zvm4b-dxmr7:460/proxy/: tls baz (200; 7.143712ms) Mar 13 00:09:46.303: INFO: (4) /api/v1/namespaces/proxy-882/pods/http:proxy-service-zvm4b-dxmr7:160/proxy/: foo (200; 4.538591ms) Mar 13 00:09:46.303: INFO: (4) /api/v1/namespaces/proxy-882/pods/http:proxy-service-zvm4b-dxmr7:162/proxy/: bar (200; 4.987968ms) Mar 13 00:09:46.304: INFO: (4) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7/proxy/: test (200; 5.120953ms) Mar 13 00:09:46.304: INFO: (4) /api/v1/namespaces/proxy-882/pods/http:proxy-service-zvm4b-dxmr7:1080/proxy/: t... (200; 5.151469ms) Mar 13 00:09:46.304: INFO: (4) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7:160/proxy/: foo (200; 5.041646ms) Mar 13 00:09:46.304: INFO: (4) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7:162/proxy/: bar (200; 5.088286ms) Mar 13 00:09:46.304: INFO: (4) /api/v1/namespaces/proxy-882/pods/https:proxy-service-zvm4b-dxmr7:443/proxy/: testt... 
(200; 3.64427ms) Mar 13 00:09:46.326: INFO: (5) /api/v1/namespaces/proxy-882/pods/https:proxy-service-zvm4b-dxmr7:443/proxy/: test (200; 12.959103ms) Mar 13 00:09:46.335: INFO: (5) /api/v1/namespaces/proxy-882/pods/https:proxy-service-zvm4b-dxmr7:462/proxy/: tls qux (200; 12.995681ms) Mar 13 00:09:46.335: INFO: (5) /api/v1/namespaces/proxy-882/services/proxy-service-zvm4b:portname2/proxy/: bar (200; 13.091089ms) Mar 13 00:09:46.335: INFO: (5) /api/v1/namespaces/proxy-882/pods/http:proxy-service-zvm4b-dxmr7:162/proxy/: bar (200; 13.06655ms) Mar 13 00:09:46.335: INFO: (5) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7:1080/proxy/: testt... (200; 3.513068ms) Mar 13 00:09:46.339: INFO: (6) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7:1080/proxy/: testtest (200; 10.807576ms) Mar 13 00:09:46.346: INFO: (6) /api/v1/namespaces/proxy-882/pods/http:proxy-service-zvm4b-dxmr7:162/proxy/: bar (200; 10.743334ms) Mar 13 00:09:46.346: INFO: (6) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7:160/proxy/: foo (200; 11.106809ms) Mar 13 00:09:46.378: INFO: (6) /api/v1/namespaces/proxy-882/services/proxy-service-zvm4b:portname2/proxy/: bar (200; 42.912737ms) Mar 13 00:09:46.378: INFO: (6) /api/v1/namespaces/proxy-882/services/http:proxy-service-zvm4b:portname2/proxy/: bar (200; 42.993793ms) Mar 13 00:09:46.378: INFO: (6) /api/v1/namespaces/proxy-882/services/proxy-service-zvm4b:portname1/proxy/: foo (200; 42.934237ms) Mar 13 00:09:46.378: INFO: (6) /api/v1/namespaces/proxy-882/services/http:proxy-service-zvm4b:portname1/proxy/: foo (200; 42.935917ms) Mar 13 00:09:46.378: INFO: (6) /api/v1/namespaces/proxy-882/services/https:proxy-service-zvm4b:tlsportname1/proxy/: tls baz (200; 43.076634ms) Mar 13 00:09:46.381: INFO: (7) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7:1080/proxy/: testt... (200; 6.183096ms) Mar 13 00:09:46.385: INFO: (7) /api/v1/namespaces/proxy-882/services/proxy-service-zvm4b:portname2/proxy/: bar (200; 6.167451ms) Mar 13 00:09:46.385: INFO: (7) /api/v1/namespaces/proxy-882/pods/http:proxy-service-zvm4b-dxmr7:162/proxy/: bar (200; 6.25871ms) Mar 13 00:09:46.385: INFO: (7) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7:162/proxy/: bar (200; 6.191744ms) Mar 13 00:09:46.385: INFO: (7) /api/v1/namespaces/proxy-882/pods/http:proxy-service-zvm4b-dxmr7:160/proxy/: foo (200; 6.228885ms) Mar 13 00:09:46.385: INFO: (7) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7/proxy/: test (200; 6.232375ms) Mar 13 00:09:46.385: INFO: (7) /api/v1/namespaces/proxy-882/services/https:proxy-service-zvm4b:tlsportname1/proxy/: tls baz (200; 6.63644ms) Mar 13 00:09:46.385: INFO: (7) /api/v1/namespaces/proxy-882/services/https:proxy-service-zvm4b:tlsportname2/proxy/: tls qux (200; 6.608315ms) Mar 13 00:09:46.385: INFO: (7) /api/v1/namespaces/proxy-882/services/proxy-service-zvm4b:portname1/proxy/: foo (200; 6.724324ms) Mar 13 00:09:46.385: INFO: (7) /api/v1/namespaces/proxy-882/pods/https:proxy-service-zvm4b-dxmr7:443/proxy/: testtest (200; 5.499478ms) Mar 13 00:09:46.391: INFO: (8) /api/v1/namespaces/proxy-882/pods/https:proxy-service-zvm4b-dxmr7:443/proxy/: t... 
(200; 5.767671ms) Mar 13 00:09:46.391: INFO: (8) /api/v1/namespaces/proxy-882/pods/http:proxy-service-zvm4b-dxmr7:160/proxy/: foo (200; 5.825814ms) Mar 13 00:09:46.391: INFO: (8) /api/v1/namespaces/proxy-882/services/https:proxy-service-zvm4b:tlsportname2/proxy/: tls qux (200; 5.999525ms) Mar 13 00:09:46.392: INFO: (8) /api/v1/namespaces/proxy-882/services/http:proxy-service-zvm4b:portname2/proxy/: bar (200; 6.620001ms) Mar 13 00:09:46.392: INFO: (8) /api/v1/namespaces/proxy-882/services/https:proxy-service-zvm4b:tlsportname1/proxy/: tls baz (200; 7.013254ms) Mar 13 00:09:46.392: INFO: (8) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7:160/proxy/: foo (200; 7.092381ms) Mar 13 00:09:46.392: INFO: (8) /api/v1/namespaces/proxy-882/pods/https:proxy-service-zvm4b-dxmr7:460/proxy/: tls baz (200; 7.122025ms) Mar 13 00:09:46.392: INFO: (8) /api/v1/namespaces/proxy-882/pods/http:proxy-service-zvm4b-dxmr7:162/proxy/: bar (200; 7.058338ms) Mar 13 00:09:46.392: INFO: (8) /api/v1/namespaces/proxy-882/services/proxy-service-zvm4b:portname1/proxy/: foo (200; 7.236043ms) Mar 13 00:09:46.392: INFO: (8) /api/v1/namespaces/proxy-882/services/http:proxy-service-zvm4b:portname1/proxy/: foo (200; 7.275317ms) Mar 13 00:09:46.392: INFO: (8) /api/v1/namespaces/proxy-882/services/proxy-service-zvm4b:portname2/proxy/: bar (200; 7.25204ms) Mar 13 00:09:46.392: INFO: (8) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7:162/proxy/: bar (200; 7.35253ms) Mar 13 00:09:46.395: INFO: (9) /api/v1/namespaces/proxy-882/pods/https:proxy-service-zvm4b-dxmr7:460/proxy/: tls baz (200; 2.95086ms) Mar 13 00:09:46.398: INFO: (9) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7:162/proxy/: bar (200; 5.077408ms) Mar 13 00:09:46.399: INFO: (9) /api/v1/namespaces/proxy-882/pods/http:proxy-service-zvm4b-dxmr7:162/proxy/: bar (200; 6.048647ms) Mar 13 00:09:46.399: INFO: (9) /api/v1/namespaces/proxy-882/pods/http:proxy-service-zvm4b-dxmr7:160/proxy/: foo (200; 6.08987ms) Mar 13 00:09:46.399: INFO: (9) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7:160/proxy/: foo (200; 6.699568ms) Mar 13 00:09:46.399: INFO: (9) /api/v1/namespaces/proxy-882/pods/https:proxy-service-zvm4b-dxmr7:462/proxy/: tls qux (200; 6.623664ms) Mar 13 00:09:46.399: INFO: (9) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7/proxy/: test (200; 6.606499ms) Mar 13 00:09:46.399: INFO: (9) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7:1080/proxy/: testt... (200; 6.760168ms) Mar 13 00:09:46.399: INFO: (9) /api/v1/namespaces/proxy-882/pods/https:proxy-service-zvm4b-dxmr7:443/proxy/: test (200; 3.392929ms) Mar 13 00:09:46.405: INFO: (10) /api/v1/namespaces/proxy-882/pods/https:proxy-service-zvm4b-dxmr7:462/proxy/: tls qux (200; 3.492487ms) Mar 13 00:09:46.405: INFO: (10) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7:162/proxy/: bar (200; 3.919333ms) Mar 13 00:09:46.405: INFO: (10) /api/v1/namespaces/proxy-882/pods/http:proxy-service-zvm4b-dxmr7:1080/proxy/: t... (200; 3.909209ms) Mar 13 00:09:46.405: INFO: (10) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7:1080/proxy/: testtesttest (200; 3.427051ms) Mar 13 00:09:46.413: INFO: (11) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7:162/proxy/: bar (200; 3.725608ms) Mar 13 00:09:46.413: INFO: (11) /api/v1/namespaces/proxy-882/pods/http:proxy-service-zvm4b-dxmr7:162/proxy/: bar (200; 3.729426ms) Mar 13 00:09:46.413: INFO: (11) /api/v1/namespaces/proxy-882/pods/https:proxy-service-zvm4b-dxmr7:443/proxy/: t... 
(200; 3.827202ms) Mar 13 00:09:46.414: INFO: (11) /api/v1/namespaces/proxy-882/services/proxy-service-zvm4b:portname1/proxy/: foo (200; 5.240532ms) Mar 13 00:09:46.414: INFO: (11) /api/v1/namespaces/proxy-882/services/http:proxy-service-zvm4b:portname2/proxy/: bar (200; 5.269939ms) Mar 13 00:09:46.414: INFO: (11) /api/v1/namespaces/proxy-882/services/https:proxy-service-zvm4b:tlsportname1/proxy/: tls baz (200; 5.36784ms) Mar 13 00:09:46.414: INFO: (11) /api/v1/namespaces/proxy-882/services/http:proxy-service-zvm4b:portname1/proxy/: foo (200; 5.308925ms) Mar 13 00:09:46.414: INFO: (11) /api/v1/namespaces/proxy-882/services/proxy-service-zvm4b:portname2/proxy/: bar (200; 5.366905ms) Mar 13 00:09:46.414: INFO: (11) /api/v1/namespaces/proxy-882/services/https:proxy-service-zvm4b:tlsportname2/proxy/: tls qux (200; 5.384518ms) Mar 13 00:09:46.417: INFO: (12) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7/proxy/: test (200; 2.01807ms) Mar 13 00:09:46.419: INFO: (12) /api/v1/namespaces/proxy-882/pods/http:proxy-service-zvm4b-dxmr7:1080/proxy/: t... (200; 4.154085ms) Mar 13 00:09:46.419: INFO: (12) /api/v1/namespaces/proxy-882/pods/http:proxy-service-zvm4b-dxmr7:160/proxy/: foo (200; 4.232318ms) Mar 13 00:09:46.419: INFO: (12) /api/v1/namespaces/proxy-882/pods/http:proxy-service-zvm4b-dxmr7:162/proxy/: bar (200; 4.181384ms) Mar 13 00:09:46.419: INFO: (12) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7:162/proxy/: bar (200; 4.238562ms) Mar 13 00:09:46.419: INFO: (12) /api/v1/namespaces/proxy-882/pods/https:proxy-service-zvm4b-dxmr7:443/proxy/: testtestt... (200; 4.034234ms) Mar 13 00:09:46.425: INFO: (13) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7/proxy/: test (200; 4.075953ms) Mar 13 00:09:46.425: INFO: (13) /api/v1/namespaces/proxy-882/pods/http:proxy-service-zvm4b-dxmr7:160/proxy/: foo (200; 4.097022ms) Mar 13 00:09:46.425: INFO: (13) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7:160/proxy/: foo (200; 4.284891ms) Mar 13 00:09:46.425: INFO: (13) /api/v1/namespaces/proxy-882/services/https:proxy-service-zvm4b:tlsportname1/proxy/: tls baz (200; 4.397728ms) Mar 13 00:09:46.425: INFO: (13) /api/v1/namespaces/proxy-882/pods/https:proxy-service-zvm4b-dxmr7:460/proxy/: tls baz (200; 4.530493ms) Mar 13 00:09:46.425: INFO: (13) /api/v1/namespaces/proxy-882/pods/https:proxy-service-zvm4b-dxmr7:462/proxy/: tls qux (200; 4.776428ms) Mar 13 00:09:46.426: INFO: (13) /api/v1/namespaces/proxy-882/services/proxy-service-zvm4b:portname2/proxy/: bar (200; 5.125107ms) Mar 13 00:09:46.426: INFO: (13) /api/v1/namespaces/proxy-882/services/http:proxy-service-zvm4b:portname2/proxy/: bar (200; 5.107199ms) Mar 13 00:09:46.426: INFO: (13) /api/v1/namespaces/proxy-882/services/https:proxy-service-zvm4b:tlsportname2/proxy/: tls qux (200; 5.193235ms) Mar 13 00:09:46.426: INFO: (13) /api/v1/namespaces/proxy-882/services/proxy-service-zvm4b:portname1/proxy/: foo (200; 5.787349ms) Mar 13 00:09:46.426: INFO: (13) /api/v1/namespaces/proxy-882/services/http:proxy-service-zvm4b:portname1/proxy/: foo (200; 5.821912ms) Mar 13 00:09:46.430: INFO: (14) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7:1080/proxy/: testtest (200; 3.600296ms) Mar 13 00:09:46.430: INFO: (14) /api/v1/namespaces/proxy-882/pods/https:proxy-service-zvm4b-dxmr7:443/proxy/: t... 
(200; 5.205441ms) Mar 13 00:09:46.432: INFO: (14) /api/v1/namespaces/proxy-882/services/https:proxy-service-zvm4b:tlsportname2/proxy/: tls qux (200; 5.827479ms) Mar 13 00:09:46.436: INFO: (15) /api/v1/namespaces/proxy-882/pods/http:proxy-service-zvm4b-dxmr7:162/proxy/: bar (200; 3.682802ms) Mar 13 00:09:46.437: INFO: (15) /api/v1/namespaces/proxy-882/pods/http:proxy-service-zvm4b-dxmr7:160/proxy/: foo (200; 4.773184ms) Mar 13 00:09:46.437: INFO: (15) /api/v1/namespaces/proxy-882/pods/https:proxy-service-zvm4b-dxmr7:460/proxy/: tls baz (200; 5.106099ms) Mar 13 00:09:46.437: INFO: (15) /api/v1/namespaces/proxy-882/pods/https:proxy-service-zvm4b-dxmr7:462/proxy/: tls qux (200; 5.058056ms) Mar 13 00:09:46.438: INFO: (15) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7:162/proxy/: bar (200; 5.167949ms) Mar 13 00:09:46.438: INFO: (15) /api/v1/namespaces/proxy-882/services/proxy-service-zvm4b:portname2/proxy/: bar (200; 5.986465ms) Mar 13 00:09:46.439: INFO: (15) /api/v1/namespaces/proxy-882/services/https:proxy-service-zvm4b:tlsportname1/proxy/: tls baz (200; 7.047113ms) Mar 13 00:09:46.439: INFO: (15) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7:160/proxy/: foo (200; 7.023955ms) Mar 13 00:09:46.439: INFO: (15) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7:1080/proxy/: testtest (200; 7.246648ms) Mar 13 00:09:46.440: INFO: (15) /api/v1/namespaces/proxy-882/pods/https:proxy-service-zvm4b-dxmr7:443/proxy/: t... (200; 7.228061ms) Mar 13 00:09:46.443: INFO: (16) /api/v1/namespaces/proxy-882/pods/http:proxy-service-zvm4b-dxmr7:1080/proxy/: t... (200; 3.544721ms) Mar 13 00:09:46.443: INFO: (16) /api/v1/namespaces/proxy-882/pods/https:proxy-service-zvm4b-dxmr7:460/proxy/: tls baz (200; 3.579982ms) Mar 13 00:09:46.443: INFO: (16) /api/v1/namespaces/proxy-882/pods/https:proxy-service-zvm4b-dxmr7:462/proxy/: tls qux (200; 3.606513ms) Mar 13 00:09:46.444: INFO: (16) /api/v1/namespaces/proxy-882/pods/https:proxy-service-zvm4b-dxmr7:443/proxy/: testtest (200; 4.387358ms) Mar 13 00:09:46.446: INFO: (16) /api/v1/namespaces/proxy-882/services/http:proxy-service-zvm4b:portname1/proxy/: foo (200; 5.943521ms) Mar 13 00:09:46.446: INFO: (16) /api/v1/namespaces/proxy-882/services/proxy-service-zvm4b:portname1/proxy/: foo (200; 6.19651ms) Mar 13 00:09:46.446: INFO: (16) /api/v1/namespaces/proxy-882/services/https:proxy-service-zvm4b:tlsportname1/proxy/: tls baz (200; 6.338961ms) Mar 13 00:09:46.446: INFO: (16) /api/v1/namespaces/proxy-882/services/http:proxy-service-zvm4b:portname2/proxy/: bar (200; 6.377493ms) Mar 13 00:09:46.446: INFO: (16) /api/v1/namespaces/proxy-882/services/proxy-service-zvm4b:portname2/proxy/: bar (200; 6.596456ms) Mar 13 00:09:46.446: INFO: (16) /api/v1/namespaces/proxy-882/services/https:proxy-service-zvm4b:tlsportname2/proxy/: tls qux (200; 6.685734ms) Mar 13 00:09:46.453: INFO: (17) /api/v1/namespaces/proxy-882/services/proxy-service-zvm4b:portname1/proxy/: foo (200; 6.773368ms) Mar 13 00:09:46.454: INFO: (17) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7:162/proxy/: bar (200; 7.46552ms) Mar 13 00:09:46.455: INFO: (17) /api/v1/namespaces/proxy-882/pods/http:proxy-service-zvm4b-dxmr7:160/proxy/: foo (200; 8.102869ms) Mar 13 00:09:46.455: INFO: (17) /api/v1/namespaces/proxy-882/pods/http:proxy-service-zvm4b-dxmr7:162/proxy/: bar (200; 8.122386ms) Mar 13 00:09:46.455: INFO: (17) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7:160/proxy/: foo (200; 8.09796ms) Mar 13 00:09:46.455: INFO: (17) 
/api/v1/namespaces/proxy-882/services/https:proxy-service-zvm4b:tlsportname2/proxy/: tls qux (200; 8.134821ms) Mar 13 00:09:46.455: INFO: (17) /api/v1/namespaces/proxy-882/pods/http:proxy-service-zvm4b-dxmr7:1080/proxy/: t... (200; 8.207572ms) Mar 13 00:09:46.455: INFO: (17) /api/v1/namespaces/proxy-882/services/https:proxy-service-zvm4b:tlsportname1/proxy/: tls baz (200; 8.214668ms) Mar 13 00:09:46.455: INFO: (17) /api/v1/namespaces/proxy-882/services/proxy-service-zvm4b:portname2/proxy/: bar (200; 8.283961ms) Mar 13 00:09:46.455: INFO: (17) /api/v1/namespaces/proxy-882/services/http:proxy-service-zvm4b:portname1/proxy/: foo (200; 8.336038ms) Mar 13 00:09:46.455: INFO: (17) /api/v1/namespaces/proxy-882/pods/https:proxy-service-zvm4b-dxmr7:462/proxy/: tls qux (200; 8.271051ms) Mar 13 00:09:46.455: INFO: (17) /api/v1/namespaces/proxy-882/services/http:proxy-service-zvm4b:portname2/proxy/: bar (200; 8.320238ms) Mar 13 00:09:46.455: INFO: (17) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7:1080/proxy/: testtest (200; 8.732332ms) Mar 13 00:09:46.461: INFO: (18) /api/v1/namespaces/proxy-882/pods/https:proxy-service-zvm4b-dxmr7:462/proxy/: tls qux (200; 5.221429ms) Mar 13 00:09:46.461: INFO: (18) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7:1080/proxy/: testt... (200; 5.19588ms) Mar 13 00:09:46.462: INFO: (18) /api/v1/namespaces/proxy-882/services/https:proxy-service-zvm4b:tlsportname1/proxy/: tls baz (200; 6.504569ms) Mar 13 00:09:46.462: INFO: (18) /api/v1/namespaces/proxy-882/services/http:proxy-service-zvm4b:portname2/proxy/: bar (200; 6.551686ms) Mar 13 00:09:46.462: INFO: (18) /api/v1/namespaces/proxy-882/services/http:proxy-service-zvm4b:portname1/proxy/: foo (200; 6.720342ms) Mar 13 00:09:46.462: INFO: (18) /api/v1/namespaces/proxy-882/services/proxy-service-zvm4b:portname1/proxy/: foo (200; 6.710801ms) Mar 13 00:09:46.462: INFO: (18) /api/v1/namespaces/proxy-882/services/https:proxy-service-zvm4b:tlsportname2/proxy/: tls qux (200; 6.785122ms) Mar 13 00:09:46.462: INFO: (18) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7:162/proxy/: bar (200; 6.749517ms) Mar 13 00:09:46.462: INFO: (18) /api/v1/namespaces/proxy-882/services/proxy-service-zvm4b:portname2/proxy/: bar (200; 6.733903ms) Mar 13 00:09:46.462: INFO: (18) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7/proxy/: test (200; 6.905036ms) Mar 13 00:09:46.462: INFO: (18) /api/v1/namespaces/proxy-882/pods/https:proxy-service-zvm4b-dxmr7:443/proxy/: t... (200; 2.639462ms) Mar 13 00:09:46.465: INFO: (19) /api/v1/namespaces/proxy-882/pods/http:proxy-service-zvm4b-dxmr7:162/proxy/: bar (200; 2.883757ms) Mar 13 00:09:46.465: INFO: (19) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7/proxy/: test (200; 2.991087ms) Mar 13 00:09:46.466: INFO: (19) /api/v1/namespaces/proxy-882/pods/https:proxy-service-zvm4b-dxmr7:460/proxy/: tls baz (200; 2.939803ms) Mar 13 00:09:46.466: INFO: (19) /api/v1/namespaces/proxy-882/pods/proxy-service-zvm4b-dxmr7:1080/proxy/: test>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:09:59.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5914" for this suite. • [SLOW TEST:11.116 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":275,"completed":145,"skipped":2400,"failed":0} SSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:09:59.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-84bf5038-ecc2-4919-831d-ccfa0f79bcd7 in namespace container-probe-4633 Mar 13 00:10:01.926: INFO: Started pod liveness-84bf5038-ecc2-4919-831d-ccfa0f79bcd7 in namespace container-probe-4633 STEP: checking the pod's current state and verifying that restartCount is present Mar 13 00:10:01.929: INFO: Initial restart count of pod liveness-84bf5038-ecc2-4919-831d-ccfa0f79bcd7 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:14:02.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4633" for this suite. 
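The probe test above reduces to a simple recipe: run a pod whose liveness probe opens a TCP socket to a port the container really listens on, wait several probe periods, and confirm restartCount never moves off 0. A hand-run equivalent, assuming kubectl access to a throwaway cluster (the namespace, pod name, and agnhost image tag below are illustrative, not taken from the suite):

kubectl create namespace probe-demo
cat <<'EOF' | kubectl apply -n probe-demo -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-tcp-demo
spec:
  containers:
  - name: agnhost
    # assumption: any image that listens on TCP 8080 works here
    image: k8s.gcr.io/e2e-test-images/agnhost:2.12
    args: ["netexec", "--http-port=8080"]
    livenessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 10
EOF
# after a few probe periods, this should still print 0:
kubectl get pod liveness-tcp-demo -n probe-demo \
  -o jsonpath='{.status.containerStatuses[0].restartCount}'

If the probe instead pointed at a port nothing listens on, the kubelet would kill and restart the container on each failure and restartCount would climb, which is why the four-minute observation window above is the bulk of this spec's runtime.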
• [SLOW TEST:242.628 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":275,"completed":146,"skipped":2405,"failed":0} S ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:14:02.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 13 00:14:02.534: INFO: (0) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 14.772881ms) Mar 13 00:14:02.550: INFO: (1) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 15.367168ms) Mar 13 00:14:02.553: INFO: (2) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.725686ms) Mar 13 00:14:02.555: INFO: (3) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.6109ms) Mar 13 00:14:02.559: INFO: (4) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.575916ms) Mar 13 00:14:02.562: INFO: (5) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.369976ms) Mar 13 00:14:02.565: INFO: (6) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.518518ms) Mar 13 00:14:02.568: INFO: (7) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.735236ms) Mar 13 00:14:02.570: INFO: (8) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.665225ms) Mar 13 00:14:02.573: INFO: (9) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.689943ms) Mar 13 00:14:02.575: INFO: (10) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.526902ms) Mar 13 00:14:02.578: INFO: (11) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.623545ms) Mar 13 00:14:02.581: INFO: (12) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.755782ms) Mar 13 00:14:02.584: INFO: (13) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.589066ms) Mar 13 00:14:02.586: INFO: (14) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.438722ms) Mar 13 00:14:02.588: INFO: (15) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.362796ms) Mar 13 00:14:02.591: INFO: (16) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.408977ms) Mar 13 00:14:02.593: INFO: (17) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.225771ms) Mar 13 00:14:02.595: INFO: (18) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.347212ms) Mar 13 00:14:02.598: INFO: (19) /api/v1/nodes/latest-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.416762ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:14:02.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-4732" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":275,"completed":147,"skipped":2406,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:14:02.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-secret-vwmk STEP: Creating a pod to test atomic-volume-subpath Mar 13 00:14:02.708: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-vwmk" in namespace "subpath-9482" to be "Succeeded or Failed" Mar 13 00:14:02.716: INFO: Pod "pod-subpath-test-secret-vwmk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.30861ms Mar 13 00:14:04.720: INFO: Pod "pod-subpath-test-secret-vwmk": Phase="Running", Reason="", readiness=true. Elapsed: 2.012373855s Mar 13 00:14:06.724: INFO: Pod "pod-subpath-test-secret-vwmk": Phase="Running", Reason="", readiness=true. Elapsed: 4.016498123s Mar 13 00:14:08.728: INFO: Pod "pod-subpath-test-secret-vwmk": Phase="Running", Reason="", readiness=true. Elapsed: 6.020492639s Mar 13 00:14:10.732: INFO: Pod "pod-subpath-test-secret-vwmk": Phase="Running", Reason="", readiness=true. Elapsed: 8.024446638s Mar 13 00:14:12.736: INFO: Pod "pod-subpath-test-secret-vwmk": Phase="Running", Reason="", readiness=true. Elapsed: 10.028231129s Mar 13 00:14:14.740: INFO: Pod "pod-subpath-test-secret-vwmk": Phase="Running", Reason="", readiness=true. Elapsed: 12.032360471s Mar 13 00:14:16.742: INFO: Pod "pod-subpath-test-secret-vwmk": Phase="Running", Reason="", readiness=true. Elapsed: 14.034705962s Mar 13 00:14:18.746: INFO: Pod "pod-subpath-test-secret-vwmk": Phase="Running", Reason="", readiness=true. Elapsed: 16.038436735s Mar 13 00:14:20.750: INFO: Pod "pod-subpath-test-secret-vwmk": Phase="Running", Reason="", readiness=true. Elapsed: 18.04271105s Mar 13 00:14:22.754: INFO: Pod "pod-subpath-test-secret-vwmk": Phase="Running", Reason="", readiness=true. Elapsed: 20.046355671s Mar 13 00:14:24.758: INFO: Pod "pod-subpath-test-secret-vwmk": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.050058876s STEP: Saw pod success Mar 13 00:14:24.758: INFO: Pod "pod-subpath-test-secret-vwmk" satisfied condition "Succeeded or Failed" Mar 13 00:14:24.760: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-secret-vwmk container test-container-subpath-secret-vwmk: STEP: delete the pod Mar 13 00:14:24.793: INFO: Waiting for pod pod-subpath-test-secret-vwmk to disappear Mar 13 00:14:24.801: INFO: Pod pod-subpath-test-secret-vwmk no longer exists STEP: Deleting pod pod-subpath-test-secret-vwmk Mar 13 00:14:24.801: INFO: Deleting pod "pod-subpath-test-secret-vwmk" in namespace "subpath-9482" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:14:24.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9482" for this suite. • [SLOW TEST:22.206 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":275,"completed":148,"skipped":2432,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:14:24.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Mar 13 00:14:29.395: INFO: Successfully updated pod "adopt-release-cqjks" STEP: Checking that the Job readopts the Pod Mar 13 00:14:29.395: INFO: Waiting up to 15m0s for pod "adopt-release-cqjks" in namespace "job-5815" to be "adopted" Mar 13 00:14:29.399: INFO: Pod "adopt-release-cqjks": Phase="Running", Reason="", readiness=true. Elapsed: 3.73119ms Mar 13 00:14:31.403: INFO: Pod "adopt-release-cqjks": Phase="Running", Reason="", readiness=true. Elapsed: 2.007261994s Mar 13 00:14:31.403: INFO: Pod "adopt-release-cqjks" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Mar 13 00:14:31.910: INFO: Successfully updated pod "adopt-release-cqjks" STEP: Checking that the Job releases the Pod Mar 13 00:14:31.910: INFO: Waiting up to 15m0s for pod "adopt-release-cqjks" in namespace "job-5815" to be "released" Mar 13 00:14:31.922: INFO: Pod "adopt-release-cqjks": Phase="Running", Reason="", readiness=true. 
Elapsed: 11.920306ms Mar 13 00:14:31.922: INFO: Pod "adopt-release-cqjks" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:14:31.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5815" for this suite. • [SLOW TEST:7.175 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":275,"completed":149,"skipped":2451,"failed":0} SSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:14:31.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-2491.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-2491.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-2491.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-2491.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2491.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-2491.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-2491.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-2491.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-2491.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2491.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 13 00:14:36.127: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local from pod dns-2491/dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3: the server could not find the requested resource (get pods dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3) Mar 13 00:14:36.129: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local from pod dns-2491/dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3: the server could not find the requested resource (get pods dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3) Mar 13 00:14:36.141: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local from pod dns-2491/dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3: the server could not find the requested resource (get pods dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3) Mar 13 00:14:36.143: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local from pod dns-2491/dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3: the server could not find the requested resource (get pods dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3) Mar 13 00:14:36.148: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2491.svc.cluster.local from pod dns-2491/dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3: the server could not find the requested resource (get pods dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3) Mar 13 00:14:36.152: INFO: Lookups using dns-2491/dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local 
jessie_tcp@dns-test-service-2.dns-2491.svc.cluster.local] Mar 13 00:14:41.156: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local from pod dns-2491/dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3: the server could not find the requested resource (get pods dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3) Mar 13 00:14:41.159: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local from pod dns-2491/dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3: the server could not find the requested resource (get pods dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3) Mar 13 00:14:41.173: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local from pod dns-2491/dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3: the server could not find the requested resource (get pods dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3) Mar 13 00:14:41.175: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local from pod dns-2491/dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3: the server could not find the requested resource (get pods dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3) Mar 13 00:14:41.185: INFO: Lookups using dns-2491/dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local] Mar 13 00:14:46.156: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local from pod dns-2491/dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3: the server could not find the requested resource (get pods dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3) Mar 13 00:14:46.160: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local from pod dns-2491/dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3: the server could not find the requested resource (get pods dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3) Mar 13 00:14:46.173: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local from pod dns-2491/dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3: the server could not find the requested resource (get pods dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3) Mar 13 00:14:46.176: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local from pod dns-2491/dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3: the server could not find the requested resource (get pods dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3) Mar 13 00:14:46.186: INFO: Lookups using dns-2491/dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local] Mar 13 00:14:51.157: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local from pod dns-2491/dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3: the server could not find the requested resource (get pods dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3) Mar 13 00:14:51.160: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local from pod 
dns-2491/dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3: the server could not find the requested resource (get pods dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3) Mar 13 00:14:51.175: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local from pod dns-2491/dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3: the server could not find the requested resource (get pods dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3) Mar 13 00:14:51.178: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local from pod dns-2491/dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3: the server could not find the requested resource (get pods dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3) Mar 13 00:14:51.188: INFO: Lookups using dns-2491/dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local] Mar 13 00:14:56.156: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local from pod dns-2491/dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3: the server could not find the requested resource (get pods dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3) Mar 13 00:14:56.159: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local from pod dns-2491/dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3: the server could not find the requested resource (get pods dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3) Mar 13 00:14:56.171: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local from pod dns-2491/dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3: the server could not find the requested resource (get pods dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3) Mar 13 00:14:56.174: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local from pod dns-2491/dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3: the server could not find the requested resource (get pods dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3) Mar 13 00:14:56.183: INFO: Lookups using dns-2491/dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local] Mar 13 00:15:01.157: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local from pod dns-2491/dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3: the server could not find the requested resource (get pods dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3) Mar 13 00:15:01.160: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local from pod dns-2491/dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3: the server could not find the requested resource (get pods dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3) Mar 13 00:15:01.175: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local from pod dns-2491/dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3: the server could not find the requested resource (get pods dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3) Mar 13 00:15:01.178: INFO: Unable to read 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local from pod dns-2491/dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3: the server could not find the requested resource (get pods dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3) Mar 13 00:15:01.192: INFO: Lookups using dns-2491/dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2491.svc.cluster.local] Mar 13 00:15:06.209: INFO: DNS probes using dns-2491/dns-test-d9bc97f6-4dd0-4b93-8191-173dbbf340e3 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:15:06.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2491" for this suite. • [SLOW TEST:34.395 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":275,"completed":150,"skipped":2454,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:15:06.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-1eee9cc5-19bd-466e-9bd7-44cb107496a4 STEP: Creating a pod to test consume secrets Mar 13 00:15:06.480: INFO: Waiting up to 5m0s for pod "pod-secrets-c424e94a-d343-403b-b9d5-f3a1d6f92876" in namespace "secrets-1569" to be "Succeeded or Failed" Mar 13 00:15:06.494: INFO: Pod "pod-secrets-c424e94a-d343-403b-b9d5-f3a1d6f92876": Phase="Pending", Reason="", readiness=false. Elapsed: 14.812855ms Mar 13 00:15:08.497: INFO: Pod "pod-secrets-c424e94a-d343-403b-b9d5-f3a1d6f92876": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.017152288s STEP: Saw pod success Mar 13 00:15:08.497: INFO: Pod "pod-secrets-c424e94a-d343-403b-b9d5-f3a1d6f92876" satisfied condition "Succeeded or Failed" Mar 13 00:15:08.499: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-c424e94a-d343-403b-b9d5-f3a1d6f92876 container secret-volume-test: STEP: delete the pod Mar 13 00:15:08.521: INFO: Waiting for pod pod-secrets-c424e94a-d343-403b-b9d5-f3a1d6f92876 to disappear Mar 13 00:15:08.526: INFO: Pod pod-secrets-c424e94a-d343-403b-b9d5-f3a1d6f92876 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:15:08.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1569" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":151,"skipped":2469,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:15:08.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Mar 13 00:15:08.612: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-176 /api/v1/namespaces/watch-176/configmaps/e2e-watch-test-label-changed 2d6e70ab-1871-45b9-a7fa-ae4d9c6f8459 1220754 0 2020-03-13 00:15:08 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 13 00:15:08.612: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-176 /api/v1/namespaces/watch-176/configmaps/e2e-watch-test-label-changed 2d6e70ab-1871-45b9-a7fa-ae4d9c6f8459 1220755 0 2020-03-13 00:15:08 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 13 00:15:08.612: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-176 /api/v1/namespaces/watch-176/configmaps/e2e-watch-test-label-changed 2d6e70ab-1871-45b9-a7fa-ae4d9c6f8459 1220756 0 2020-03-13 00:15:08 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a 
third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Mar 13 00:15:18.659: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-176 /api/v1/namespaces/watch-176/configmaps/e2e-watch-test-label-changed 2d6e70ab-1871-45b9-a7fa-ae4d9c6f8459 1220823 0 2020-03-13 00:15:08 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 13 00:15:18.659: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-176 /api/v1/namespaces/watch-176/configmaps/e2e-watch-test-label-changed 2d6e70ab-1871-45b9-a7fa-ae4d9c6f8459 1220824 0 2020-03-13 00:15:08 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 13 00:15:18.659: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-176 /api/v1/namespaces/watch-176/configmaps/e2e-watch-test-label-changed 2d6e70ab-1871-45b9-a7fa-ae4d9c6f8459 1220825 0 2020-03-13 00:15:08 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:15:18.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-176" for this suite. • [SLOW TEST:10.135 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":275,"completed":152,"skipped":2489,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:15:18.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1206 STEP: creating the pod Mar 13 00:15:18.731: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3733' Mar 13 00:15:18.981: INFO: stderr: "" Mar 13 00:15:18.981: INFO: stdout: "pod/pause created\n" Mar 13 00:15:18.981: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Mar 13 00:15:18.981: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-3733" to be 
"running and ready" Mar 13 00:15:18.993: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 12.316801ms Mar 13 00:15:20.997: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 2.015916158s Mar 13 00:15:20.997: INFO: Pod "pause" satisfied condition "running and ready" Mar 13 00:15:20.997: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: adding the label testing-label with value testing-label-value to a pod Mar 13 00:15:20.997: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-3733' Mar 13 00:15:21.096: INFO: stderr: "" Mar 13 00:15:21.096: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Mar 13 00:15:21.096: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3733' Mar 13 00:15:21.178: INFO: stderr: "" Mar 13 00:15:21.178: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 3s testing-label-value\n" STEP: removing the label testing-label of a pod Mar 13 00:15:21.178: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-3733' Mar 13 00:15:21.265: INFO: stderr: "" Mar 13 00:15:21.265: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Mar 13 00:15:21.265: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3733' Mar 13 00:15:21.327: INFO: stderr: "" Mar 13 00:15:21.327: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 3s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1213 STEP: using delete to clean up resources Mar 13 00:15:21.327: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3733' Mar 13 00:15:21.440: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 13 00:15:21.440: INFO: stdout: "pod \"pause\" force deleted\n" Mar 13 00:15:21.440: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-3733' Mar 13 00:15:21.512: INFO: stderr: "No resources found in kubectl-3733 namespace.\n" Mar 13 00:15:21.513: INFO: stdout: "" Mar 13 00:15:21.513: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-3733 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 13 00:15:21.575: INFO: stderr: "" Mar 13 00:15:21.575: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:15:21.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3733" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":275,"completed":153,"skipped":2519,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:15:21.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Mar 13 00:15:21.618: INFO: >>> kubeConfig: /root/.kube/config Mar 13 00:15:24.371: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:15:34.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2526" for this suite. 
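The CRD publish-openapi case above asserts that two CustomResourceDefinitions living in different API groups both surface in the apiserver's aggregated OpenAPI document. Roughly the same check can be done by hand, assuming cluster-admin rights; the group names and kinds here are made up for illustration:

cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.groupa.example.com
spec:
  group: groupa.example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: bars.groupb.example.com
spec:
  group: groupb.example.com
  scope: Namespaced
  names:
    plural: bars
    singular: bar
    kind: Bar
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
EOF
# once both CRDs are Established, each group should appear in the served spec:
kubectl get --raw /openapi/v2 | grep -c 'groupa.example.com'
kubectl get --raw /openapi/v2 | grep -c 'groupb.example.com'

Publication of CRD schemas into /openapi/v2 is asynchronous, so a short wait may be needed before the greps find anything; that lag is also why this spec spends most of its 12 seconds polling.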
• [SLOW TEST:12.820 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":154,"skipped":2534,"failed":0} S ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:15:34.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:15:47.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-2634" for this suite. STEP: Destroying namespace "nsdeletetest-2045" for this suite. Mar 13 00:15:47.670: INFO: Namespace nsdeletetest-2045 was already deleted STEP: Destroying namespace "nsdeletetest-6216" for this suite. 
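The namespace-deletion case above is easy to replay by hand: pods are namespaced objects, so deleting their namespace must garbage-collect them, and recreating a namespace with the same name must come back empty. A minimal sketch, names illustrative:

kubectl create namespace nsdelete-demo
kubectl run sleeper --image=busybox --restart=Never -n nsdelete-demo -- sleep 3600
kubectl wait --for=condition=Ready pod/sleeper -n nsdelete-demo --timeout=60s
kubectl delete namespace nsdelete-demo --wait=true
kubectl create namespace nsdelete-demo
kubectl get pods -n nsdelete-demo    # expect: No resources found

The --wait=true on the delete keeps kubectl around until the namespace finalizer has actually flushed the namespace's contents, which appears to be the same condition the test's "Waiting for the namespace to be removed" step polls for.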
• [SLOW TEST:13.276 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":275,"completed":155,"skipped":2535,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:15:47.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0313 00:16:27.766801 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 13 00:16:27.766: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:16:27.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8407" for this suite. 
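The garbage-collector case above hinges on delete options: deleting a replication controller with the orphan propagation policy removes the RC itself but must leave its pods running. With kubectl this corresponds roughly to the following sketch (names illustrative; recent kubectl spells the flag --cascade=orphan, while clients from the era of this run used --cascade=false):

kubectl create namespace gc-demo
cat <<'EOF' | kubectl apply -n gc-demo -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-demo
spec:
  replicas: 2
  selector:
    app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx
EOF
kubectl delete rc nginx-demo -n gc-demo --cascade=orphan
kubectl get pods -n gc-demo -l app=nginx-demo   # the two pods survive, now ownerless

The 30-second pause visible in the log is the test deliberately giving the garbage collector a window in which to misbehave; with the orphan policy set, nothing should touch the pods during it.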
• [SLOW TEST:40.098 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":156,"skipped":2548,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:16:27.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 13 00:16:27.824: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7eccb63a-107e-4eeb-af8f-290ace986648" in namespace "downward-api-8678" to be "Succeeded or Failed" Mar 13 00:16:27.851: INFO: Pod "downwardapi-volume-7eccb63a-107e-4eeb-af8f-290ace986648": Phase="Pending", Reason="", readiness=false. Elapsed: 26.745446ms Mar 13 00:16:29.855: INFO: Pod "downwardapi-volume-7eccb63a-107e-4eeb-af8f-290ace986648": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.031019892s STEP: Saw pod success Mar 13 00:16:29.855: INFO: Pod "downwardapi-volume-7eccb63a-107e-4eeb-af8f-290ace986648" satisfied condition "Succeeded or Failed" Mar 13 00:16:29.858: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-7eccb63a-107e-4eeb-af8f-290ace986648 container client-container: STEP: delete the pod Mar 13 00:16:29.890: INFO: Waiting for pod downwardapi-volume-7eccb63a-107e-4eeb-af8f-290ace986648 to disappear Mar 13 00:16:29.905: INFO: Pod downwardapi-volume-7eccb63a-107e-4eeb-af8f-290ace986648 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:16:29.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8678" for this suite. 
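Annotation: the downward API volume test above projects a pod field into a file and asserts the file's permission bits. A sketch of the kind of pod it builds, assuming the corev1/metav1 imports from the first annotation; the image, command, paths, and the 0400 mode are illustrative values, not the framework's exact ones:

```go
// downwardAPIModePod returns a pod whose downward API volume item carries an
// explicit per-item file mode; the container prints the resulting mode.
func downwardAPIModePod() *corev1.Pod {
	mode := int32(0400)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "downwardapi-volume-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "stat -c %a /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "podinfo", MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "podname",
							FieldRef: &corev1.ObjectFieldSelector{
								FieldPath: "metadata.name",
							},
							Mode: &mode, // per-item mode: the behaviour under test
						}},
					},
				},
			}},
		},
	}
}
```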
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":157,"skipped":2556,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:16:29.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 13 00:16:29.981: INFO: Waiting up to 5m0s for pod "pod-0d59ccb8-6379-4422-9f13-9213eb547c86" in namespace "emptydir-8277" to be "Succeeded or Failed" Mar 13 00:16:30.000: INFO: Pod "pod-0d59ccb8-6379-4422-9f13-9213eb547c86": Phase="Pending", Reason="", readiness=false. Elapsed: 19.074485ms Mar 13 00:16:32.004: INFO: Pod "pod-0d59ccb8-6379-4422-9f13-9213eb547c86": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.022718487s STEP: Saw pod success Mar 13 00:16:32.004: INFO: Pod "pod-0d59ccb8-6379-4422-9f13-9213eb547c86" satisfied condition "Succeeded or Failed" Mar 13 00:16:32.007: INFO: Trying to get logs from node latest-worker pod pod-0d59ccb8-6379-4422-9f13-9213eb547c86 container test-container: STEP: delete the pod Mar 13 00:16:32.045: INFO: Waiting for pod pod-0d59ccb8-6379-4422-9f13-9213eb547c86 to disappear Mar 13 00:16:32.055: INFO: Pod pod-0d59ccb8-6379-4422-9f13-9213eb547c86 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:16:32.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8277" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":158,"skipped":2574,"failed":0} SS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:16:32.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:17:32.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7897" for this suite. • [SLOW TEST:60.143 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":159,"skipped":2576,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:17:32.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-3a8d05c9-b96a-402b-bc5c-5a48bdeba403 STEP: Creating a pod to test consume secrets Mar 13 00:17:32.314: INFO: Waiting up to 5m0s for pod "pod-secrets-8baa367c-1551-49c0-b102-8574987ebcb0" in namespace "secrets-2184" to be "Succeeded or Failed" Mar 13 00:17:32.319: INFO: Pod "pod-secrets-8baa367c-1551-49c0-b102-8574987ebcb0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.283644ms Mar 13 00:17:34.323: INFO: Pod "pod-secrets-8baa367c-1551-49c0-b102-8574987ebcb0": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.00872753s Mar 13 00:17:36.331: INFO: Pod "pod-secrets-8baa367c-1551-49c0-b102-8574987ebcb0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016773829s STEP: Saw pod success Mar 13 00:17:36.331: INFO: Pod "pod-secrets-8baa367c-1551-49c0-b102-8574987ebcb0" satisfied condition "Succeeded or Failed" Mar 13 00:17:36.367: INFO: Trying to get logs from node latest-worker pod pod-secrets-8baa367c-1551-49c0-b102-8574987ebcb0 container secret-volume-test: STEP: delete the pod Mar 13 00:17:36.381: INFO: Waiting for pod pod-secrets-8baa367c-1551-49c0-b102-8574987ebcb0 to disappear Mar 13 00:17:36.386: INFO: Pod pod-secrets-8baa367c-1551-49c0-b102-8574987ebcb0 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:17:36.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2184" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":160,"skipped":2609,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:17:36.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Mar 13 00:17:36.436: INFO: >>> kubeConfig: /root/.kube/config Mar 13 00:17:38.211: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:17:48.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3106" for this suite. 
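Annotation: this CRD test registers two CustomResourceDefinitions that share a group and version but declare different kinds, then checks that both schemas appear in the apiserver's aggregated OpenAPI document (served at /openapi/v2). A sketch of the CRD shape involved, assuming the metav1 import from the first annotation plus apiextensionsv1 = k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1; the group and kind names are illustrative:

```go
// crdForKind builds a namespaced v1 CRD with a minimal structural schema.
func crdForKind(kind, plural string) *apiextensionsv1.CustomResourceDefinition {
	group := "crd-publish-openapi-test.example.com"
	return &apiextensionsv1.CustomResourceDefinition{
		// CRD names must be <plural>.<group>.
		ObjectMeta: metav1.ObjectMeta{Name: plural + "." + group},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: group,
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Kind:   kind,
				Plural: plural,
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name:    "v1",
				Served:  true,
				Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{Type: "object"},
				},
			}},
		},
	}
}
```

Applying crdForKind("Foo", "foos") and crdForKind("Bar", "bars") yields two definitions under the same group/version whose schemas must both show up in the published document.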
• [SLOW TEST:11.755 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":275,"completed":161,"skipped":2615,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:17:48.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 13 00:17:48.233: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8d13b1a9-a658-4bf8-85cd-12b3f9cdeadc" in namespace "projected-9117" to be "Succeeded or Failed" Mar 13 00:17:48.269: INFO: Pod "downwardapi-volume-8d13b1a9-a658-4bf8-85cd-12b3f9cdeadc": Phase="Pending", Reason="", readiness=false. Elapsed: 36.678328ms Mar 13 00:17:50.273: INFO: Pod "downwardapi-volume-8d13b1a9-a658-4bf8-85cd-12b3f9cdeadc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.039951717s STEP: Saw pod success Mar 13 00:17:50.273: INFO: Pod "downwardapi-volume-8d13b1a9-a658-4bf8-85cd-12b3f9cdeadc" satisfied condition "Succeeded or Failed" Mar 13 00:17:50.276: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-8d13b1a9-a658-4bf8-85cd-12b3f9cdeadc container client-container: STEP: delete the pod Mar 13 00:17:50.295: INFO: Waiting for pod downwardapi-volume-8d13b1a9-a658-4bf8-85cd-12b3f9cdeadc to disappear Mar 13 00:17:50.297: INFO: Pod downwardapi-volume-8d13b1a9-a658-4bf8-85cd-12b3f9cdeadc no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:17:50.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9117" for this suite. 
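Annotation: unlike the earlier downward API test, which set a per-item mode, this one sets DefaultMode on a projected volume so the mode applies to every file the volume emits. A sketch of the pod, with the same import assumptions and illustrative values as before:

```go
// projectedDefaultModePod returns a pod with a downward API source wrapped in
// a projected volume; DefaultMode (rather than a per-item Mode) governs the
// permission bits of the projected file.
func projectedDefaultModePod() *corev1.Pod {
	defaultMode := int32(0400)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "downwardapi-volume-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "stat -c %a /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "podinfo", MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &defaultMode, // applies to every projected file
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "podname",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
								}},
							},
						}},
					},
				},
			}},
		},
	}
}
```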
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":162,"skipped":2632,"failed":0} SSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:17:50.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 13 00:17:50.356: INFO: Creating deployment "webserver-deployment" Mar 13 00:17:50.361: INFO: Waiting for observed generation 1 Mar 13 00:17:52.384: INFO: Waiting for all required pods to come up Mar 13 00:17:52.387: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Mar 13 00:17:56.393: INFO: Waiting for deployment "webserver-deployment" to complete Mar 13 00:17:56.398: INFO: Updating deployment "webserver-deployment" with a non-existent image Mar 13 00:17:56.403: INFO: Updating deployment webserver-deployment Mar 13 00:17:56.403: INFO: Waiting for observed generation 2 Mar 13 00:17:58.419: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Mar 13 00:17:58.421: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Mar 13 00:17:58.423: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Mar 13 00:17:58.429: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Mar 13 00:17:58.429: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Mar 13 00:17:58.431: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Mar 13 00:17:58.435: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Mar 13 00:17:58.435: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Mar 13 00:17:58.441: INFO: Updating deployment webserver-deployment Mar 13 00:17:58.441: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Mar 13 00:17:58.483: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Mar 13 00:17:58.495: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 13 00:17:58.564: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-435 /apis/apps/v1/namespaces/deployment-435/deployments/webserver-deployment 398b787f-6aad-49e4-9496-9956f0858577 1221949 3 2020-03-13 00:17:50 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0005bf688 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-03-13 00:17:56 +0000 UTC,LastTransitionTime:2020-03-13 00:17:50 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-13 00:17:58 +0000 UTC,LastTransitionTime:2020-03-13 00:17:58 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Mar 13 00:17:58.633: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-435 /apis/apps/v1/namespaces/deployment-435/replicasets/webserver-deployment-c7997dcc8 e1b82154-9e68-4753-9e19-df7ef2f3c438 1221933 3 2020-03-13 00:17:56 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 398b787f-6aad-49e4-9496-9956f0858577 0xc000335fc7 0xc000335fc8}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0007a44c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 13 00:17:58.633: INFO: All old ReplicaSets of Deployment "webserver-deployment": Mar 13 00:17:58.633: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-435 
/apis/apps/v1/namespaces/deployment-435/replicasets/webserver-deployment-595b5b9587 9a8e37d6-d1e5-42e5-9d98-d8ec168bd52e 1221973 3 2020-03-13 00:17:50 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 398b787f-6aad-49e4-9496-9956f0858577 0xc000335d97 0xc000335d98}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000335f18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Mar 13 00:17:58.662: INFO: Pod "webserver-deployment-595b5b9587-5gsxg" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5gsxg webserver-deployment-595b5b9587- deployment-435 /api/v1/namespaces/deployment-435/pods/webserver-deployment-595b5b9587-5gsxg 8d05d68e-3f6e-4def-9e08-292037b5b9f1 1221944 0 2020-03-13 00:17:58 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9a8e37d6-d1e5-42e5-9d98-d8ec168bd52e 0xc00065bb27 0xc00065bb28}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lgr78,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lgr78,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lgr78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 13 00:17:58.663: INFO: Pod "webserver-deployment-595b5b9587-758xh" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-758xh webserver-deployment-595b5b9587- deployment-435 /api/v1/namespaces/deployment-435/pods/webserver-deployment-595b5b9587-758xh e0f4ba19-ecba-4822-b50f-1c91437c7de9 1221814 0 2020-03-13 00:17:50 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] 
[{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9a8e37d6-d1e5-42e5-9d98-d8ec168bd52e 0xc000aca0c7 0xc000aca0c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lgr78,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lgr78,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lgr78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:50 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.2.18,StartTime:2020-03-13 00:17:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-13 00:17:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://723cb3e38ac409d7440087176fb44d73fa6eeb0c1e8a162f36b7abbf15a2c228,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.18,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 13 00:17:58.663: INFO: Pod "webserver-deployment-595b5b9587-7tsfd" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7tsfd webserver-deployment-595b5b9587- deployment-435 /api/v1/namespaces/deployment-435/pods/webserver-deployment-595b5b9587-7tsfd 21042b28-3bd2-4c66-9fed-2130fe7f8a72 1221986 0 2020-03-13 00:17:58 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9a8e37d6-d1e5-42e5-9d98-d8ec168bd52e 0xc000aca2b7 0xc000aca2b8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lgr78,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lgr78,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lgr78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-
ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:,StartTime:2020-03-13 00:17:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 13 00:17:58.663: INFO: Pod "webserver-deployment-595b5b9587-7xf6q" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7xf6q webserver-deployment-595b5b9587- deployment-435 /api/v1/namespaces/deployment-435/pods/webserver-deployment-595b5b9587-7xf6q 1149414b-66cd-442b-b897-fc1e13720a99 1221953 0 2020-03-13 00:17:58 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9a8e37d6-d1e5-42e5-9d98-d8ec168bd52e 0xc000aca5a7 0xc000aca5a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lgr78,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lgr78,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lgr78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 13 00:17:58.663: INFO: Pod "webserver-deployment-595b5b9587-9rlfz" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9rlfz webserver-deployment-595b5b9587- deployment-435 /api/v1/namespaces/deployment-435/pods/webserver-deployment-595b5b9587-9rlfz fedc4b14-1502-4fcc-b3b1-13537c497566 1221839 0 2020-03-13 00:17:50 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] 
[{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9a8e37d6-d1e5-42e5-9d98-d8ec168bd52e 0xc000aca757 0xc000aca758}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lgr78,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lgr78,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lgr78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:50 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.2.21,StartTime:2020-03-13 00:17:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-13 00:17:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://179ed7d175425dda359d80f200b075d09c00787af1b3ef56413c7d119b28d436,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.21,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 13 00:17:58.663: INFO: Pod "webserver-deployment-595b5b9587-d4xh9" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-d4xh9 webserver-deployment-595b5b9587- deployment-435 /api/v1/namespaces/deployment-435/pods/webserver-deployment-595b5b9587-d4xh9 d59eeda8-7c70-4e4f-9dcd-ace9d7ee6388 1221978 0 2020-03-13 00:17:58 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9a8e37d6-d1e5-42e5-9d98-d8ec168bd52e 0xc000acab47 0xc000acab48}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lgr78,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lgr78,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lgr78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not
-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 13 00:17:58.663: INFO: Pod "webserver-deployment-595b5b9587-dns2v" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dns2v webserver-deployment-595b5b9587- deployment-435 /api/v1/namespaces/deployment-435/pods/webserver-deployment-595b5b9587-dns2v dc8b4757-cd66-4117-864b-10dcb626c449 1221807 0 2020-03-13 00:17:50 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9a8e37d6-d1e5-42e5-9d98-d8ec168bd52e 0xc000acae37 0xc000acae38}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lgr78,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lgr78,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lgr78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContain
ers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.2.17,StartTime:2020-03-13 00:17:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-13 00:17:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2b09e78d96741671fff018ac72c2978e4b358de5b768a5868c6b96cb8106d1cd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.17,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 13 00:17:58.664: INFO: Pod "webserver-deployment-595b5b9587-dwllv" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dwllv webserver-deployment-595b5b9587- deployment-435 /api/v1/namespaces/deployment-435/pods/webserver-deployment-595b5b9587-dwllv 86e986d3-f15c-43ef-bcf3-f313211f922f 1221981 0 2020-03-13 00:17:58 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9a8e37d6-d1e5-42e5-9d98-d8ec168bd52e 0xc000acb487 0xc000acb488}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lgr78,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lgr78,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lgr78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 13 00:17:58.664: INFO: Pod "webserver-deployment-595b5b9587-g5lcp" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-g5lcp webserver-deployment-595b5b9587- deployment-435 /api/v1/namespaces/deployment-435/pods/webserver-deployment-595b5b9587-g5lcp 17e99a48-2c53-4f40-8d7c-49e748d0809e 1221975 0 2020-03-13 00:17:58 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] 
[{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9a8e37d6-d1e5-42e5-9d98-d8ec168bd52e 0xc000acb727 0xc000acb728}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lgr78,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lgr78,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lgr78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 13 00:17:58.664: INFO: Pod "webserver-deployment-595b5b9587-j4qs5" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-j4qs5 webserver-deployment-595b5b9587- deployment-435 /api/v1/namespaces/deployment-435/pods/webserver-deployment-595b5b9587-j4qs5 
558e1c27-a9d5-4b4c-9555-96ef900385f2 1221976 0 2020-03-13 00:17:58 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9a8e37d6-d1e5-42e5-9d98-d8ec168bd52e 0xc000acba67 0xc000acba68}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lgr78,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lgr78,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lgr78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 13 00:17:58.664: INFO: Pod "webserver-deployment-595b5b9587-k8q6f" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-k8q6f 
webserver-deployment-595b5b9587- deployment-435 /api/v1/namespaces/deployment-435/pods/webserver-deployment-595b5b9587-k8q6f c0fc430e-8943-4388-b1f3-29a3155a7834 1221948 0 2020-03-13 00:17:58 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9a8e37d6-d1e5-42e5-9d98-d8ec168bd52e 0xc000acbc37 0xc000acbc38}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lgr78,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lgr78,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lgr78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 13 00:17:58.664: INFO: 
Pod "webserver-deployment-595b5b9587-kgfvw" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-kgfvw webserver-deployment-595b5b9587- deployment-435 /api/v1/namespaces/deployment-435/pods/webserver-deployment-595b5b9587-kgfvw e77e5c9a-62d5-44a2-9e55-8b79a79cf7f6 1221956 0 2020-03-13 00:17:58 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9a8e37d6-d1e5-42e5-9d98-d8ec168bd52e 0xc000acbe47 0xc000acbe48}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lgr78,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lgr78,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lgr78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:58 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 13 00:17:58.664: INFO: Pod "webserver-deployment-595b5b9587-l9l6q" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-l9l6q webserver-deployment-595b5b9587- deployment-435 /api/v1/namespaces/deployment-435/pods/webserver-deployment-595b5b9587-l9l6q 3b200640-f181-4d37-9ec7-471c731778b9 1221834 0 2020-03-13 00:17:50 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9a8e37d6-d1e5-42e5-9d98-d8ec168bd52e 0xc0004fa8a7 0xc0004fa8a8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lgr78,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lgr78,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lgr78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:In
itialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:10.244.1.237,StartTime:2020-03-13 00:17:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-13 00:17:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://3e81e015b393cb7146116c7f93751a6ec4bcd03067e653e2132b9a366dbbc72a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.237,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 13 00:17:58.664: INFO: Pod "webserver-deployment-595b5b9587-lc6b9" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-lc6b9 webserver-deployment-595b5b9587- deployment-435 /api/v1/namespaces/deployment-435/pods/webserver-deployment-595b5b9587-lc6b9 548c7832-6b0f-4c54-962b-598b2bb1647c 1221825 0 2020-03-13 00:17:50 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9a8e37d6-d1e5-42e5-9d98-d8ec168bd52e 0xc0004fb747 0xc0004fb748}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lgr78,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lgr78,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lgr78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:10.244.1.239,StartTime:2020-03-13 00:17:50 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-13 00:17:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://8a5773df9033eda0a9566f5781e00fcfaa762824d6b6b6a441ae1a00080425f1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.239,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 13 00:17:58.665: INFO: Pod "webserver-deployment-595b5b9587-qpxgh" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qpxgh webserver-deployment-595b5b9587- deployment-435 /api/v1/namespaces/deployment-435/pods/webserver-deployment-595b5b9587-qpxgh 73629871-f7db-4b70-a22a-4d2b30447de4 1221955 0 2020-03-13 00:17:58 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9a8e37d6-d1e5-42e5-9d98-d8ec168bd52e 0xc00081c0d7 0xc00081c0d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lgr78,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lgr78,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lgr78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachab
le,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 13 00:17:58.665: INFO: Pod "webserver-deployment-595b5b9587-rdm7c" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rdm7c webserver-deployment-595b5b9587- deployment-435 /api/v1/namespaces/deployment-435/pods/webserver-deployment-595b5b9587-rdm7c 8bc268df-b52f-4b4b-87db-ab427a925e39 1221954 0 2020-03-13 00:17:58 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9a8e37d6-d1e5-42e5-9d98-d8ec168bd52e 0xc00081c277 0xc00081c278}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lgr78,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lgr78,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lgr78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/n
ot-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 13 00:17:58.665: INFO: Pod "webserver-deployment-595b5b9587-sgths" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-sgths webserver-deployment-595b5b9587- deployment-435 /api/v1/namespaces/deployment-435/pods/webserver-deployment-595b5b9587-sgths f104056d-2b29-49c0-8863-c1fdc89e05d1 1221827 0 2020-03-13 00:17:50 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9a8e37d6-d1e5-42e5-9d98-d8ec168bd52e 0xc00081ca87 0xc00081ca88}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lgr78,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lgr78,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lgr78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContai
ners:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:10.244.1.235,StartTime:2020-03-13 00:17:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-13 00:17:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://80a38d56a00933939dfe45848defd0d8b31a5421137ada91e854f7558d104159,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.235,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 13 00:17:58.665: INFO: Pod "webserver-deployment-595b5b9587-v62m5" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-v62m5 webserver-deployment-595b5b9587- deployment-435 /api/v1/namespaces/deployment-435/pods/webserver-deployment-595b5b9587-v62m5 911641c9-1b73-4b70-a22f-a16906fd8ff3 1221843 0 2020-03-13 00:17:50 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9a8e37d6-d1e5-42e5-9d98-d8ec168bd52e 0xc00081d0e7 0xc00081d0e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lgr78,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lgr78,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lgr78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.2.20,StartTime:2020-03-13 00:17:50 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-13 00:17:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a87e40aef5cf8ca8dad425ee08bc2de7244a551c6dedb250162f516be8a47216,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.20,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 13 00:17:58.665: INFO: Pod "webserver-deployment-595b5b9587-w7qxw" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-w7qxw webserver-deployment-595b5b9587- deployment-435 /api/v1/namespaces/deployment-435/pods/webserver-deployment-595b5b9587-w7qxw bdf6ee1c-167a-4ff7-acb3-dd6fc2df0ede 1221837 0 2020-03-13 00:17:50 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9a8e37d6-d1e5-42e5-9d98-d8ec168bd52e 0xc00081ddf7 0xc00081ddf8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lgr78,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lgr78,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lgr78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,O
perator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.2.19,StartTime:2020-03-13 00:17:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-13 00:17:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b2d6d0e992a2ffdba1245bd66fcc4bf76ce0eb752b2e7633f6340513d2b4b3be,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.19,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 13 00:17:58.666: INFO: Pod "webserver-deployment-595b5b9587-z5z85" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-z5z85 webserver-deployment-595b5b9587- deployment-435 /api/v1/namespaces/deployment-435/pods/webserver-deployment-595b5b9587-z5z85 e79742c2-2201-4f53-b8a5-4c04664b960b 1221971 0 2020-03-13 00:17:58 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 9a8e37d6-d1e5-42e5-9d98-d8ec168bd52e 0xc000f5e1c7 0xc000f5e1c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lgr78,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lgr78,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lgr78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 13 00:17:58.666: INFO: Pod "webserver-deployment-c7997dcc8-2cpd5" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-2cpd5 webserver-deployment-c7997dcc8- deployment-435 /api/v1/namespaces/deployment-435/pods/webserver-deployment-c7997dcc8-2cpd5 8ba4cafe-42b9-43c6-a982-eb3b7e9c8fbc 1221958 0 2020-03-13 00:17:58 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] 
[{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e1b82154-9e68-4753-9e19-df7ef2f3c438 0xc000f5e377 0xc000f5e378}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lgr78,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lgr78,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lgr78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 13 00:17:58.666: INFO: Pod "webserver-deployment-c7997dcc8-2lvkm" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-2lvkm webserver-deployment-c7997dcc8- deployment-435 /api/v1/namespaces/deployment-435/pods/webserver-deployment-c7997dcc8-2lvkm e92f6ffd-307a-4759-9f8e-56fc13dc6b3c 
1221982 0 2020-03-13 00:17:58 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e1b82154-9e68-4753-9e19-df7ef2f3c438 0xc000f5e567 0xc000f5e568}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lgr78,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lgr78,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lgr78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 13 00:17:58.666: INFO: Pod "webserver-deployment-c7997dcc8-5xv7n" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-5xv7n webserver-deployment-c7997dcc8- deployment-435 
/api/v1/namespaces/deployment-435/pods/webserver-deployment-c7997dcc8-5xv7n f7e45049-4f32-42ae-afd1-23c4d89b7e53 1221912 0 2020-03-13 00:17:56 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e1b82154-9e68-4753-9e19-df7ef2f3c438 0xc000f5e747 0xc000f5e748}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lgr78,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lgr78,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lgr78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:56 
+0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:,StartTime:2020-03-13 00:17:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 13 00:17:58.666: INFO: Pod "webserver-deployment-c7997dcc8-8fslf" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8fslf webserver-deployment-c7997dcc8- deployment-435 /api/v1/namespaces/deployment-435/pods/webserver-deployment-c7997dcc8-8fslf 8499895c-50d7-428e-9d75-1b4520db1f06 1221942 0 2020-03-13 00:17:58 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e1b82154-9e68-4753-9e19-df7ef2f3c438 0xc000f5ea17 0xc000f5ea18}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lgr78,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lgr78,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lgr78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute
,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 13 00:17:58.666: INFO: Pod "webserver-deployment-c7997dcc8-h2kbw" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-h2kbw webserver-deployment-c7997dcc8- deployment-435 /api/v1/namespaces/deployment-435/pods/webserver-deployment-c7997dcc8-h2kbw da61ebe0-9358-4c6b-b917-d9c230f51ec4 1221977 0 2020-03-13 00:17:58 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e1b82154-9e68-4753-9e19-df7ef2f3c438 0xc000f5edf7 0xc000f5edf8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lgr78,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lgr78,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lgr78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Tolerat
ion{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 13 00:17:58.667: INFO: Pod "webserver-deployment-c7997dcc8-k24wc" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-k24wc webserver-deployment-c7997dcc8- deployment-435 /api/v1/namespaces/deployment-435/pods/webserver-deployment-c7997dcc8-k24wc 494d8702-05fe-4130-b6b5-4c002cf1fae4 1221911 0 2020-03-13 00:17:56 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e1b82154-9e68-4753-9e19-df7ef2f3c438 0xc000f5f067 0xc000f5f068}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lgr78,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lgr78,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lgr78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-schedu
ler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-03-13 00:17:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 13 00:17:58.667: INFO: Pod "webserver-deployment-c7997dcc8-kpwxc" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-kpwxc webserver-deployment-c7997dcc8- deployment-435 /api/v1/namespaces/deployment-435/pods/webserver-deployment-c7997dcc8-kpwxc 657f6e6c-7c7f-4577-8c92-7956082f1637 1221883 0 2020-03-13 00:17:56 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e1b82154-9e68-4753-9e19-df7ef2f3c438 0xc000f5f437 0xc000f5f438}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lgr78,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lgr78,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lgr78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:,StartTime:2020-03-13 00:17:56 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 13 00:17:58.667: INFO: Pod "webserver-deployment-c7997dcc8-l5jr8" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-l5jr8 webserver-deployment-c7997dcc8- deployment-435 /api/v1/namespaces/deployment-435/pods/webserver-deployment-c7997dcc8-l5jr8 901f7378-657a-4047-ba65-93b13f11fb11 1221929 0 2020-03-13 00:17:56 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e1b82154-9e68-4753-9e19-df7ef2f3c438 0xc000f5f7c7 0xc000f5f7c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lgr78,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lgr78,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lgr78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,Pree
mptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.2.22,StartTime:2020-03-13 00:17:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.22,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 13 00:17:58.668: INFO: Pod "webserver-deployment-c7997dcc8-n9k9w" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-n9k9w webserver-deployment-c7997dcc8- deployment-435 /api/v1/namespaces/deployment-435/pods/webserver-deployment-c7997dcc8-n9k9w 73124b96-d2f4-4605-a232-2e4cebd5e09b 1221987 0 2020-03-13 00:17:58 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e1b82154-9e68-4753-9e19-df7ef2f3c438 0xc000f5fbb7 0xc000f5fbb8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lgr78,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lgr78,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lgr78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 13 00:17:58.668: INFO: Pod "webserver-deployment-c7997dcc8-npfnb" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-npfnb webserver-deployment-c7997dcc8- deployment-435 /api/v1/namespaces/deployment-435/pods/webserver-deployment-c7997dcc8-npfnb 05f9740e-88fc-4117-89a6-d9b132a89b8e 1221979 0 2020-03-13 00:17:58 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet 
webserver-deployment-c7997dcc8 e1b82154-9e68-4753-9e19-df7ef2f3c438 0xc000f5ff47 0xc000f5ff48}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lgr78,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lgr78,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lgr78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 13 00:17:58.668: INFO: Pod "webserver-deployment-c7997dcc8-phpsk" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-phpsk webserver-deployment-c7997dcc8- deployment-435 /api/v1/namespaces/deployment-435/pods/webserver-deployment-c7997dcc8-phpsk 72dfde96-7353-4072-8c74-59a5773dc87c 1221905 0 2020-03-13 
00:17:56 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e1b82154-9e68-4753-9e19-df7ef2f3c438 0xc003afc147 0xc003afc148}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lgr78,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lgr78,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lgr78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:,StartTime:2020-03-13 00:17:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 13 00:17:58.668: INFO: Pod "webserver-deployment-c7997dcc8-v4cxp" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-v4cxp webserver-deployment-c7997dcc8- deployment-435 /api/v1/namespaces/deployment-435/pods/webserver-deployment-c7997dcc8-v4cxp 6684a2c6-0fa9-4bcd-86ef-05155a2e4d4c 1221959 0 2020-03-13 00:17:58 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e1b82154-9e68-4753-9e19-df7ef2f3c438 0xc003afc327 0xc003afc328}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lgr78,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lgr78,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lgr78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Opera
tor:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 13 00:17:58.668: INFO: Pod "webserver-deployment-c7997dcc8-w6xx4" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-w6xx4 webserver-deployment-c7997dcc8- deployment-435 /api/v1/namespaces/deployment-435/pods/webserver-deployment-c7997dcc8-w6xx4 8c6fd865-db92-4614-b51c-f790981040ff 1221980 0 2020-03-13 00:17:58 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e1b82154-9e68-4753-9e19-df7ef2f3c438 0xc003afc487 0xc003afc488}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lgr78,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lgr78,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lgr78,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:
NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:17:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:17:58.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-435" for this suite. • [SLOW TEST:8.524 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":275,"completed":163,"skipped":2639,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:17:58.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 13 00:17:59.190: INFO: Creating deployment "test-recreate-deployment" Mar 13 00:17:59.211: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1 Mar 13 00:17:59.308: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Mar 13 00:18:01.311: INFO: Waiting for deployment "test-recreate-deployment" to complete Mar 13 00:18:01.313: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719655479, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719655479, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."},
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719655479, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719655479, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-846c7dd955\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 13 00:18:03.315: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719655479, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719655479, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719655479, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719655479, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-846c7dd955\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 13 00:18:05.334: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Mar 13 00:18:05.400: INFO: Updating deployment test-recreate-deployment Mar 13 00:18:05.400: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 13 00:18:06.000: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-7800 /apis/apps/v1/namespaces/deployment-7800/deployments/test-recreate-deployment 5a8ec6a2-9670-49fe-a970-593643271960 1222232 2 2020-03-13 00:17:59 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0011531f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-13 00:18:05 +0000 
UTC,LastTransitionTime:2020-03-13 00:18:05 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-03-13 00:18:05 +0000 UTC,LastTransitionTime:2020-03-13 00:17:59 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Mar 13 00:18:06.022: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-7800 /apis/apps/v1/namespaces/deployment-7800/replicasets/test-recreate-deployment-5f94c574ff a624adc3-18e0-4551-9436-656feeba62e1 1222229 1 2020-03-13 00:18:05 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 5a8ec6a2-9670-49fe-a970-593643271960 0xc001153b97 0xc001153b98}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001153c48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 13 00:18:06.022: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Mar 13 00:18:06.022: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-846c7dd955 deployment-7800 /apis/apps/v1/namespaces/deployment-7800/replicasets/test-recreate-deployment-846c7dd955 3608ab12-6c7e-4cb7-91a2-5011e945d0c0 1222211 2 2020-03-13 00:17:59 +0000 UTC map[name:sample-pod-3 pod-template-hash:846c7dd955] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 5a8ec6a2-9670-49fe-a970-593643271960 0xc001153cc7 0xc001153cc8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 846c7dd955,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:846c7dd955] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001153d48 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 13 00:18:06.063: INFO: Pod "test-recreate-deployment-5f94c574ff-89mbg" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-89mbg test-recreate-deployment-5f94c574ff- deployment-7800 /api/v1/namespaces/deployment-7800/pods/test-recreate-deployment-5f94c574ff-89mbg 8b0a706b-97be-4662-9829-c5f6f50a47ee 1222221 0 2020-03-13 00:18:05 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff a624adc3-18e0-4551-9436-656feeba62e1 0xc00421c937 0xc00421c938}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2tphg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2tphg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2tphg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:Resourc
eList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:18:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:18:06.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7800" for this suite. • [SLOW TEST:7.333 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":164,"skipped":2653,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:18:06.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 13 00:18:06.315: INFO: The status of Pod test-webserver-781df065-719c-4d9e-a17f-bfa62fe77152 is Pending, waiting for it to be Running (with Ready = true) Mar 13 00:18:08.318: INFO: The status of Pod test-webserver-781df065-719c-4d9e-a17f-bfa62fe77152 is Pending, waiting for it to be Running (with Ready = true) Mar 13 00:18:10.318: INFO: The status of Pod test-webserver-781df065-719c-4d9e-a17f-bfa62fe77152 is Pending, waiting for it to be Running (with Ready = true) Mar 13 00:18:12.331: INFO: The status of Pod test-webserver-781df065-719c-4d9e-a17f-bfa62fe77152 is Running (Ready = false) Mar 13 00:18:14.318: INFO: The status of Pod test-webserver-781df065-719c-4d9e-a17f-bfa62fe77152 is Running (Ready = false) Mar 13 00:18:16.318: INFO: The status of Pod test-webserver-781df065-719c-4d9e-a17f-bfa62fe77152 is Running (Ready = false) Mar 13 00:18:18.318: INFO: The status of Pod test-webserver-781df065-719c-4d9e-a17f-bfa62fe77152 is Running (Ready = false) Mar 13 00:18:20.607: INFO: The status of Pod test-webserver-781df065-719c-4d9e-a17f-bfa62fe77152 is Running (Ready = false) Mar 13 00:18:22.318: INFO: The status of Pod 
test-webserver-781df065-719c-4d9e-a17f-bfa62fe77152 is Running (Ready = false) Mar 13 00:18:24.318: INFO: The status of Pod test-webserver-781df065-719c-4d9e-a17f-bfa62fe77152 is Running (Ready = false) Mar 13 00:18:26.318: INFO: The status of Pod test-webserver-781df065-719c-4d9e-a17f-bfa62fe77152 is Running (Ready = true) Mar 13 00:18:26.320: INFO: Container started at 2020-03-13 00:18:08 +0000 UTC, pod became ready at 2020-03-13 00:18:24 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:18:26.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1486" for this suite. • [SLOW TEST:20.166 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":275,"completed":165,"skipped":2694,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:18:26.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:18:42.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-812" for this suite. • [SLOW TEST:16.298 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":275,"completed":166,"skipped":2699,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:18:42.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-5c618698-4fee-473a-b003-c3b1aee4f503 STEP: Creating a pod to test consume configMaps Mar 13 00:18:42.671: INFO: Waiting up to 5m0s for pod "pod-configmaps-80509e34-d569-4022-b479-454098a4aada" in namespace "configmap-8122" to be "Succeeded or Failed" Mar 13 00:18:42.676: INFO: Pod "pod-configmaps-80509e34-d569-4022-b479-454098a4aada": Phase="Pending", Reason="", readiness=false. Elapsed: 4.77249ms Mar 13 00:18:44.683: INFO: Pod "pod-configmaps-80509e34-d569-4022-b479-454098a4aada": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011970775s Mar 13 00:18:46.687: INFO: Pod "pod-configmaps-80509e34-d569-4022-b479-454098a4aada": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015721569s STEP: Saw pod success Mar 13 00:18:46.687: INFO: Pod "pod-configmaps-80509e34-d569-4022-b479-454098a4aada" satisfied condition "Succeeded or Failed" Mar 13 00:18:46.690: INFO: Trying to get logs from node latest-worker pod pod-configmaps-80509e34-d569-4022-b479-454098a4aada container configmap-volume-test: STEP: delete the pod Mar 13 00:18:46.718: INFO: Waiting for pod pod-configmaps-80509e34-d569-4022-b479-454098a4aada to disappear Mar 13 00:18:46.720: INFO: Pod pod-configmaps-80509e34-d569-4022-b479-454098a4aada no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:18:46.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8122" for this suite. 
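
For reference, a minimal client-go sketch of the objects the ConfigMap-volume test above builds: a ConfigMap whose key is remapped to a different file path inside the volume, mounted by a pod running as a non-root UID. All names, the image, and the UID here are illustrative, not taken from the test source.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createMappedConfigMapPod creates a ConfigMap and a non-root pod that mounts
// it with an item mapping, so key "data-1" appears at path/to/data-2 inside
// the volume instead of under its own key name.
func createMappedConfigMapPod(ctx context.Context, cs kubernetes.Interface, ns string) error {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "example-cm"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	if _, err := cs.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{}); err != nil {
		return err
	}
	uid := int64(1000) // any non-root UID
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "example-cm"},
						Items:                []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"cat", "/etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
	_, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}

The "Succeeded or Failed" polling in the log is the framework waiting for this one-shot pod to exit, then checking the container's log output against the ConfigMap value.
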
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":167,"skipped":2747,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:18:46.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 13 00:18:46.841: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-7c7328a6-d6a1-4c0e-a7c3-7cfbf79497fb" in namespace "security-context-test-9683" to be "Succeeded or Failed" Mar 13 00:18:46.845: INFO: Pod "alpine-nnp-false-7c7328a6-d6a1-4c0e-a7c3-7cfbf79497fb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.989574ms Mar 13 00:18:48.859: INFO: Pod "alpine-nnp-false-7c7328a6-d6a1-4c0e-a7c3-7cfbf79497fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.017911401s Mar 13 00:18:48.859: INFO: Pod "alpine-nnp-false-7c7328a6-d6a1-4c0e-a7c3-7cfbf79497fb" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:18:48.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9683" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":168,"skipped":2763,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:18:48.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name s-test-opt-del-f07883f6-ef33-4bfd-9031-776a44c41173 STEP: Creating secret with name s-test-opt-upd-dc251de2-fbad-4efe-8b66-1438c5f67eb8 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-f07883f6-ef33-4bfd-9031-776a44c41173 STEP: Updating secret s-test-opt-upd-dc251de2-fbad-4efe-8b66-1438c5f67eb8 STEP: Creating secret with name s-test-opt-create-aacf9bf4-5aba-4509-8227-e3a45a17a1b7 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:20:21.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7934" for this suite. 
• [SLOW TEST:92.482 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":169,"skipped":2788,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:20:21.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-3140/configmap-test-f2e40c0d-b2ce-4c83-85e6-7d4f65d7ee0b STEP: Creating a pod to test consume configMaps Mar 13 00:20:21.422: INFO: Waiting up to 5m0s for pod "pod-configmaps-9b8fe970-e40d-47b8-a34e-3db40a6ab83d" in namespace "configmap-3140" to be "Succeeded or Failed" Mar 13 00:20:21.441: INFO: Pod "pod-configmaps-9b8fe970-e40d-47b8-a34e-3db40a6ab83d": Phase="Pending", Reason="", readiness=false. Elapsed: 18.212381ms Mar 13 00:20:23.444: INFO: Pod "pod-configmaps-9b8fe970-e40d-47b8-a34e-3db40a6ab83d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021314942s Mar 13 00:20:25.447: INFO: Pod "pod-configmaps-9b8fe970-e40d-47b8-a34e-3db40a6ab83d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024569033s STEP: Saw pod success Mar 13 00:20:25.447: INFO: Pod "pod-configmaps-9b8fe970-e40d-47b8-a34e-3db40a6ab83d" satisfied condition "Succeeded or Failed" Mar 13 00:20:25.449: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-9b8fe970-e40d-47b8-a34e-3db40a6ab83d container env-test: STEP: delete the pod Mar 13 00:20:25.489: INFO: Waiting for pod pod-configmaps-9b8fe970-e40d-47b8-a34e-3db40a6ab83d to disappear Mar 13 00:20:25.493: INFO: Pod pod-configmaps-9b8fe970-e40d-47b8-a34e-3db40a6ab83d no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:20:25.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3140" for this suite. 
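
The ConfigMap-as-environment test above reduces to one EnvVar wired through valueFrom. A sketch with illustrative names:

package main

import (
	corev1 "k8s.io/api/core/v1"
)

// configMapEnv wires a single ConfigMap key into a container environment
// variable; the test container can then echo the variable to prove delivery.
func configMapEnv(cmName string) corev1.EnvVar {
	return corev1.EnvVar{
		Name: "DATA_1",
		ValueFrom: &corev1.EnvVarSource{
			ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
				LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
				Key:                  "data-1",
			},
		},
	}
}
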
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":275,"completed":170,"skipped":2816,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:20:25.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 13 00:20:25.568: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ffc91fe5-de17-42b1-8ca9-f4970a716e72" in namespace "downward-api-7590" to be "Succeeded or Failed" Mar 13 00:20:25.588: INFO: Pod "downwardapi-volume-ffc91fe5-de17-42b1-8ca9-f4970a716e72": Phase="Pending", Reason="", readiness=false. Elapsed: 20.393839ms Mar 13 00:20:27.592: INFO: Pod "downwardapi-volume-ffc91fe5-de17-42b1-8ca9-f4970a716e72": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023629974s Mar 13 00:20:29.594: INFO: Pod "downwardapi-volume-ffc91fe5-de17-42b1-8ca9-f4970a716e72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026331746s STEP: Saw pod success Mar 13 00:20:29.594: INFO: Pod "downwardapi-volume-ffc91fe5-de17-42b1-8ca9-f4970a716e72" satisfied condition "Succeeded or Failed" Mar 13 00:20:29.596: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-ffc91fe5-de17-42b1-8ca9-f4970a716e72 container client-container: STEP: delete the pod Mar 13 00:20:29.608: INFO: Waiting for pod downwardapi-volume-ffc91fe5-de17-42b1-8ca9-f4970a716e72 to disappear Mar 13 00:20:29.613: INFO: Pod downwardapi-volume-ffc91fe5-de17-42b1-8ca9-f4970a716e72 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:20:29.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7590" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":275,"completed":171,"skipped":2830,"failed":0} SSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:20:29.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 13 00:20:32.812: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:20:32.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2338" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":172,"skipped":2835,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:20:32.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 13 00:20:32.894: INFO: Waiting up to 5m0s for pod "pod-2113b82f-2b27-42d5-bd3c-431528fd5290" in namespace "emptydir-5561" to be "Succeeded or Failed" Mar 13 00:20:32.898: INFO: Pod "pod-2113b82f-2b27-42d5-bd3c-431528fd5290": Phase="Pending", Reason="", readiness=false. Elapsed: 3.977258ms Mar 13 00:20:34.902: INFO: Pod "pod-2113b82f-2b27-42d5-bd3c-431528fd5290": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007580802s Mar 13 00:20:36.906: INFO: Pod "pod-2113b82f-2b27-42d5-bd3c-431528fd5290": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01131053s STEP: Saw pod success Mar 13 00:20:36.906: INFO: Pod "pod-2113b82f-2b27-42d5-bd3c-431528fd5290" satisfied condition "Succeeded or Failed" Mar 13 00:20:36.908: INFO: Trying to get logs from node latest-worker pod pod-2113b82f-2b27-42d5-bd3c-431528fd5290 container test-container: STEP: delete the pod Mar 13 00:20:36.946: INFO: Waiting for pod pod-2113b82f-2b27-42d5-bd3c-431528fd5290 to disappear Mar 13 00:20:36.953: INFO: Pod pod-2113b82f-2b27-42d5-bd3c-431528fd5290 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:20:36.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5561" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":173,"skipped":2838,"failed":0} SSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:20:36.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 13 00:20:37.039: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-87868e0d-3094-443f-851b-37de19b3088d" in namespace "security-context-test-8276" to be "Succeeded or Failed" Mar 13 00:20:37.043: INFO: Pod "busybox-privileged-false-87868e0d-3094-443f-851b-37de19b3088d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.636286ms Mar 13 00:20:39.050: INFO: Pod "busybox-privileged-false-87868e0d-3094-443f-851b-37de19b3088d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010722291s Mar 13 00:20:39.050: INFO: Pod "busybox-privileged-false-87868e0d-3094-443f-851b-37de19b3088d" satisfied condition "Succeeded or Failed" Mar 13 00:20:39.056: INFO: Got logs for pod "busybox-privileged-false-87868e0d-3094-443f-851b-37de19b3088d": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:20:39.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8276" for this suite. 
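
The privileged=false test above runs a network-admin command inside an unprivileged container; the "RTNETLINK answers: Operation not permitted" line in the log is exactly the failure it wants to see. A sketch, with illustrative names:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// unprivilegedPod runs an operation that needs CAP_NET_ADMIN with
// privileged=false, so the kernel is expected to refuse it.
func unprivilegedPod() *corev1.Pod {
	privileged := false
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-privileged-false-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:            "busybox",
				Image:           "busybox",
				Command:         []string{"ip", "link", "add", "dummy0", "type", "dummy"},
				SecurityContext: &corev1.SecurityContext{Privileged: &privileged},
			}},
		},
	}
}
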
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":174,"skipped":2844,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:20:39.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 13 00:20:41.641: INFO: Successfully updated pod "pod-update-c6d42c5b-9705-453a-8b74-bed0b3d8bba1" STEP: verifying the updated pod is in kubernetes Mar 13 00:20:41.647: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:20:41.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-950" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":175,"skipped":2855,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:20:41.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Mar 13 00:20:41.769: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9592 /api/v1/namespaces/watch-9592/configmaps/e2e-watch-test-resource-version c896293f-6fed-4db7-94f4-1265e623b5c7 1223124 0 2020-03-13 00:20:41 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 13 00:20:41.769: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9592 /api/v1/namespaces/watch-9592/configmaps/e2e-watch-test-resource-version c896293f-6fed-4db7-94f4-1265e623b5c7 1223125 0 2020-03-13 00:20:41 +0000 UTC 
map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:20:41.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9592" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":275,"completed":176,"skipped":2863,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:20:41.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 13 00:20:42.238: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 13 00:20:44.254: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719655642, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719655642, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719655642, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719655642, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 13 00:20:47.279: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Mar 13 00:20:47.295: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:20:47.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1409" for this suite. 
STEP: Destroying namespace "webhook-1409-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.635 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":275,"completed":177,"skipped":2865,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:20:47.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Mar 13 00:20:51.999: INFO: Successfully updated pod "annotationupdate20319093-8488-452f-b875-4e07fcea0437" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:20:54.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7602" for this suite. 
• [SLOW TEST:6.629 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":178,"skipped":2895,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:20:54.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name s-test-opt-del-0a3e7cfb-ea52-4d22-bc7b-fa999167123e STEP: Creating secret with name s-test-opt-upd-2332752b-2c23-43bc-a82b-e72c48e09b55 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-0a3e7cfb-ea52-4d22-bc7b-fa999167123e STEP: Updating secret s-test-opt-upd-2332752b-2c23-43bc-a82b-e72c48e09b55 STEP: Creating secret with name s-test-opt-create-e134da3e-c79b-4dfa-8c3d-c9f934e9979f STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:22:30.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8092" for this suite. 
• [SLOW TEST:96.757 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":179,"skipped":2922,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:22:30.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 13 00:22:31.539: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 13 00:22:34.602: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:22:46.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1242" for this suite. STEP: Destroying namespace "webhook-1242-markers" for this suite. 
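
The timeout test above varies two knobs on a webhook whose backend deliberately answers slowly: timeoutSeconds and failurePolicy. A sketch of the registration, assuming illustrative webhook name and service path (the delayed backend is the one deployed earlier in the test):

package main

import (
	admissionregv1 "k8s.io/api/admissionregistration/v1"
)

// slowWebhook registers a webhook against a backend that delays ~5s. With
// TimeoutSeconds=1 and failurePolicy=Fail the API request errors out; with
// Ignore it is admitted despite the timeout; leaving TimeoutSeconds nil
// defaults to 10s in v1, which outlasts the delay.
func slowWebhook(ns string, caBundle []byte, policy admissionregv1.FailurePolicyType) admissionregv1.ValidatingWebhook {
	timeout := int32(1)
	path := "/always-allow-delay-5s" // illustrative path on the test server
	sideEffects := admissionregv1.SideEffectClassNone
	return admissionregv1.ValidatingWebhook{
		Name: "slow-webhook.example.com", // illustrative
		ClientConfig: admissionregv1.WebhookClientConfig{
			Service:  &admissionregv1.ServiceReference{Namespace: ns, Name: "e2e-test-webhook", Path: &path},
			CABundle: caBundle,
		},
		Rules: []admissionregv1.RuleWithOperations{{
			Operations: []admissionregv1.OperationType{admissionregv1.Create},
			Rule:       admissionregv1.Rule{APIGroups: []string{""}, APIVersions: []string{"v1"}, Resources: []string{"pods"}},
		}},
		FailurePolicy:           &policy,
		TimeoutSeconds:          &timeout,
		SideEffects:             &sideEffects,
		AdmissionReviewVersions: []string{"v1", "v1beta1"},
	}
}
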
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.012 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":275,"completed":180,"skipped":2965,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:22:46.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 13 00:22:47.401: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 13 00:22:49.417: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719655767, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719655767, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719655767, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719655767, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 13 00:22:52.457: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 13 00:22:52.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-303-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:22:53.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-55" for this suite. STEP: Destroying namespace "webhook-55-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.962 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":181,"skipped":2999,"failed":0} SSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:22:53.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:22:53.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-3270" for this suite. 
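
The Table test above is about content negotiation: clients can ask the apiserver to render a list as a server-side Table via the Accept header, and a backend that cannot produce one must answer 406 Not Acceptable. A sketch of such a request against a built-in resource (which does support it); the 406 case in the test uses a backend without metadata support:

package main

import (
	"context"

	"k8s.io/client-go/kubernetes"
)

// requestTable asks the apiserver for a Table rendering of a pod list by
// setting the meta.k8s.io Table media type in the Accept header.
func requestTable(ctx context.Context, cs kubernetes.Interface, ns string) ([]byte, error) {
	return cs.CoreV1().RESTClient().Get().
		AbsPath("/api/v1/namespaces/" + ns + "/pods").
		SetHeader("Accept", "application/json;as=Table;v=v1;g=meta.k8s.io").
		DoRaw(ctx)
}
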
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":275,"completed":182,"skipped":3003,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:22:53.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-projected-all-test-volume-27eefb67-9f82-4495-9aaa-4dd903fd8e63 STEP: Creating secret with name secret-projected-all-test-volume-9d81307d-6e02-469e-8629-fd7015e6fcad STEP: Creating a pod to test Check all projections for projected volume plugin Mar 13 00:22:53.978: INFO: Waiting up to 5m0s for pod "projected-volume-83dd866f-4782-4c34-a005-24da8d9293ba" in namespace "projected-1264" to be "Succeeded or Failed" Mar 13 00:22:53.998: INFO: Pod "projected-volume-83dd866f-4782-4c34-a005-24da8d9293ba": Phase="Pending", Reason="", readiness=false. Elapsed: 19.701405ms Mar 13 00:22:56.002: INFO: Pod "projected-volume-83dd866f-4782-4c34-a005-24da8d9293ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.023246519s STEP: Saw pod success Mar 13 00:22:56.002: INFO: Pod "projected-volume-83dd866f-4782-4c34-a005-24da8d9293ba" satisfied condition "Succeeded or Failed" Mar 13 00:22:56.004: INFO: Trying to get logs from node latest-worker pod projected-volume-83dd866f-4782-4c34-a005-24da8d9293ba container projected-all-volume-test: STEP: delete the pod Mar 13 00:22:56.026: INFO: Waiting for pod projected-volume-83dd866f-4782-4c34-a005-24da8d9293ba to disappear Mar 13 00:22:56.036: INFO: Pod projected-volume-83dd866f-4782-4c34-a005-24da8d9293ba no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:22:56.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1264" for this suite. 
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":275,"completed":183,"skipped":3043,"failed":0} SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:22:56.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-884f2c81-4763-4df0-8859-bb2e3693de55 STEP: Creating a pod to test consume secrets Mar 13 00:22:56.128: INFO: Waiting up to 5m0s for pod "pod-secrets-620c0c77-814c-422b-8a21-4edb29b449d5" in namespace "secrets-1818" to be "Succeeded or Failed" Mar 13 00:22:56.152: INFO: Pod "pod-secrets-620c0c77-814c-422b-8a21-4edb29b449d5": Phase="Pending", Reason="", readiness=false. Elapsed: 23.378028ms Mar 13 00:22:58.155: INFO: Pod "pod-secrets-620c0c77-814c-422b-8a21-4edb29b449d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026727596s Mar 13 00:23:00.159: INFO: Pod "pod-secrets-620c0c77-814c-422b-8a21-4edb29b449d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030321414s STEP: Saw pod success Mar 13 00:23:00.159: INFO: Pod "pod-secrets-620c0c77-814c-422b-8a21-4edb29b449d5" satisfied condition "Succeeded or Failed" Mar 13 00:23:00.161: INFO: Trying to get logs from node latest-worker pod pod-secrets-620c0c77-814c-422b-8a21-4edb29b449d5 container secret-volume-test: STEP: delete the pod Mar 13 00:23:00.202: INFO: Waiting for pod pod-secrets-620c0c77-814c-422b-8a21-4edb29b449d5 to disappear Mar 13 00:23:00.209: INFO: Pod pod-secrets-620c0c77-814c-422b-8a21-4edb29b449d5 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:23:00.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1818" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":184,"skipped":3050,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:23:00.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Mar 13 00:23:00.279: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 13 00:23:00.293: INFO: Waiting for terminating namespaces to be deleted... Mar 13 00:23:00.294: INFO: Logging pods the kubelet thinks is on node latest-worker before test Mar 13 00:23:00.298: INFO: kube-proxy-9jc24 from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container statuses recorded) Mar 13 00:23:00.298: INFO: Container kube-proxy ready: true, restart count 0 Mar 13 00:23:00.298: INFO: kindnet-2j5xm from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container statuses recorded) Mar 13 00:23:00.298: INFO: Container kindnet-cni ready: true, restart count 0 Mar 13 00:23:00.298: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Mar 13 00:23:00.308: INFO: kindnet-spz5f from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container statuses recorded) Mar 13 00:23:00.308: INFO: Container kindnet-cni ready: true, restart count 0 Mar 13 00:23:00.308: INFO: coredns-6955765f44-cgshp from kube-system started at 2020-03-08 14:50:16 +0000 UTC (1 container statuses recorded) Mar 13 00:23:00.308: INFO: Container coredns ready: true, restart count 0 Mar 13 00:23:00.308: INFO: kube-proxy-cx5xz from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container statuses recorded) Mar 13 00:23:00.308: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-25190d8a-3107-48ed-beb6-31aa738ce2c3 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-25190d8a-3107-48ed-beb6-31aa738ce2c3 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-25190d8a-3107-48ed-beb6-31aa738ce2c3 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:23:04.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7730" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":275,"completed":185,"skipped":3063,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:23:04.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 13 00:23:07.117: INFO: Successfully updated pod "pod-update-activedeadlineseconds-5f89dcc3-8352-4488-937c-b7526403e774" Mar 13 00:23:07.117: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-5f89dcc3-8352-4488-937c-b7526403e774" in namespace "pods-5329" to be "terminated due to deadline exceeded" Mar 13 00:23:07.126: INFO: Pod "pod-update-activedeadlineseconds-5f89dcc3-8352-4488-937c-b7526403e774": Phase="Running", Reason="", readiness=true. Elapsed: 9.368779ms Mar 13 00:23:09.130: INFO: Pod "pod-update-activedeadlineseconds-5f89dcc3-8352-4488-937c-b7526403e774": Phase="Running", Reason="", readiness=true. Elapsed: 2.013236343s Mar 13 00:23:11.149: INFO: Pod "pod-update-activedeadlineseconds-5f89dcc3-8352-4488-937c-b7526403e774": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.031875531s Mar 13 00:23:11.149: INFO: Pod "pod-update-activedeadlineseconds-5f89dcc3-8352-4488-937c-b7526403e774" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:23:11.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5329" for this suite. • [SLOW TEST:6.673 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":275,"completed":186,"skipped":3113,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:23:11.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:23:28.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3847" for this suite. • [SLOW TEST:17.130 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":275,"completed":187,"skipped":3115,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:23:28.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 13 00:23:28.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Mar 13 00:23:28.865: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-13T00:23:28Z generation:1 name:name1 resourceVersion:1224095 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:a67ab21f-4eae-4e39-89ba-60a02130ba70] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Mar 13 00:23:38.871: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-13T00:23:38Z generation:1 name:name2 resourceVersion:1224134 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:05949b81-192c-4207-9452-0ec9e6983b7e] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Mar 13 00:23:48.877: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-13T00:23:28Z generation:2 name:name1 resourceVersion:1224164 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:a67ab21f-4eae-4e39-89ba-60a02130ba70] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Mar 13 00:23:58.889: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-13T00:23:38Z generation:2 name:name2 resourceVersion:1224193 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:05949b81-192c-4207-9452-0ec9e6983b7e] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Mar 13 00:24:08.908: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-13T00:23:28Z generation:2 name:name1 resourceVersion:1224222 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:a67ab21f-4eae-4e39-89ba-60a02130ba70] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Mar 13 00:24:18.916: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-13T00:23:38Z generation:2 name:name2 resourceVersion:1224251 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:05949b81-192c-4207-9452-0ec9e6983b7e] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] 
CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:24:29.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-5829" for this suite. • [SLOW TEST:61.141 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":275,"completed":188,"skipped":3165,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:24:29.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0313 00:24:39.561013 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 13 00:24:39.561: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:24:39.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5029" for this suite. 
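
The garbage-collector test above turns on the "delete the rc" STEP: deleting the owner without orphaning means the GC follows ownerReferences and removes the dependent pods, which the test then waits to observe. A sketch of that delete call, with an illustrative RC name:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteRCWithoutOrphaning deletes a ReplicationController with background
// propagation (the opposite of Orphan), so the garbage collector deletes the
// RC's pods after the owner is gone.
func deleteRCWithoutOrphaning(ctx context.Context, cs kubernetes.Interface, ns, rcName string) error {
	policy := metav1.DeletePropagationBackground
	return cs.CoreV1().ReplicationControllers(ns).Delete(ctx, rcName,
		metav1.DeleteOptions{PropagationPolicy: &policy})
}
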
• [SLOW TEST:10.133 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":275,"completed":189,"skipped":3194,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:24:39.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 13 00:24:39.675: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Mar 13 00:24:39.699: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:24:39.709: INFO: Number of nodes with available pods: 0 Mar 13 00:24:39.709: INFO: Node latest-worker is running more than one daemon pod Mar 13 00:24:40.713: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:24:40.715: INFO: Number of nodes with available pods: 0 Mar 13 00:24:40.715: INFO: Node latest-worker is running more than one daemon pod Mar 13 00:24:41.714: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:24:41.717: INFO: Number of nodes with available pods: 1 Mar 13 00:24:41.717: INFO: Node latest-worker2 is running more than one daemon pod Mar 13 00:24:42.714: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:24:42.716: INFO: Number of nodes with available pods: 2 Mar 13 00:24:42.716: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Mar 13 00:24:42.752: INFO: Wrong image for pod: daemon-set-66z58. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 13 00:24:42.752: INFO: Wrong image for pod: daemon-set-hqzzh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Mar 13 00:24:42.759: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:24:43.765: INFO: Wrong image for pod: daemon-set-66z58. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 13 00:24:43.765: INFO: Wrong image for pod: daemon-set-hqzzh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 13 00:24:43.769: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:24:44.765: INFO: Wrong image for pod: daemon-set-66z58. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 13 00:24:44.765: INFO: Wrong image for pod: daemon-set-hqzzh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 13 00:24:44.793: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:24:45.763: INFO: Wrong image for pod: daemon-set-66z58. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 13 00:24:45.763: INFO: Wrong image for pod: daemon-set-hqzzh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 13 00:24:45.763: INFO: Pod daemon-set-hqzzh is not available Mar 13 00:24:45.767: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:24:46.763: INFO: Wrong image for pod: daemon-set-66z58. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 13 00:24:46.763: INFO: Wrong image for pod: daemon-set-hqzzh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 13 00:24:46.763: INFO: Pod daemon-set-hqzzh is not available Mar 13 00:24:46.765: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:24:47.763: INFO: Wrong image for pod: daemon-set-66z58. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 13 00:24:47.763: INFO: Wrong image for pod: daemon-set-hqzzh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 13 00:24:47.763: INFO: Pod daemon-set-hqzzh is not available Mar 13 00:24:47.765: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:24:48.763: INFO: Wrong image for pod: daemon-set-66z58. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 13 00:24:48.763: INFO: Wrong image for pod: daemon-set-hqzzh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Mar 13 00:24:48.763: INFO: Pod daemon-set-hqzzh is not available Mar 13 00:24:48.767: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:24:49.763: INFO: Wrong image for pod: daemon-set-66z58. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 13 00:24:49.763: INFO: Wrong image for pod: daemon-set-hqzzh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 13 00:24:49.763: INFO: Pod daemon-set-hqzzh is not available Mar 13 00:24:49.767: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:24:50.763: INFO: Wrong image for pod: daemon-set-66z58. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 13 00:24:50.763: INFO: Wrong image for pod: daemon-set-hqzzh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 13 00:24:50.763: INFO: Pod daemon-set-hqzzh is not available Mar 13 00:24:50.766: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:24:51.764: INFO: Wrong image for pod: daemon-set-66z58. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 13 00:24:51.764: INFO: Wrong image for pod: daemon-set-hqzzh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 13 00:24:51.764: INFO: Pod daemon-set-hqzzh is not available Mar 13 00:24:51.768: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:24:52.763: INFO: Wrong image for pod: daemon-set-66z58. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 13 00:24:52.763: INFO: Pod daemon-set-qq7lq is not available Mar 13 00:24:52.767: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:24:53.763: INFO: Wrong image for pod: daemon-set-66z58. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 13 00:24:53.763: INFO: Pod daemon-set-qq7lq is not available Mar 13 00:24:53.776: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:24:54.763: INFO: Wrong image for pod: daemon-set-66z58. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 13 00:24:54.767: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:24:55.763: INFO: Wrong image for pod: daemon-set-66z58. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Mar 13 00:24:55.763: INFO: Pod daemon-set-66z58 is not available Mar 13 00:24:55.766: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:24:56.763: INFO: Wrong image for pod: daemon-set-66z58. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 13 00:24:56.763: INFO: Pod daemon-set-66z58 is not available Mar 13 00:24:56.766: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:24:57.763: INFO: Wrong image for pod: daemon-set-66z58. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 13 00:24:57.763: INFO: Pod daemon-set-66z58 is not available Mar 13 00:24:57.785: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:24:58.763: INFO: Wrong image for pod: daemon-set-66z58. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 13 00:24:58.763: INFO: Pod daemon-set-66z58 is not available Mar 13 00:24:58.766: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:24:59.763: INFO: Wrong image for pod: daemon-set-66z58. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 13 00:24:59.763: INFO: Pod daemon-set-66z58 is not available Mar 13 00:24:59.767: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:25:00.781: INFO: Wrong image for pod: daemon-set-66z58. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 13 00:25:00.781: INFO: Pod daemon-set-66z58 is not available Mar 13 00:25:00.784: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:25:01.763: INFO: Wrong image for pod: daemon-set-66z58. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 13 00:25:01.763: INFO: Pod daemon-set-66z58 is not available Mar 13 00:25:01.767: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:25:02.762: INFO: Pod daemon-set-lr7tn is not available Mar 13 00:25:02.766: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Mar 13 00:25:02.769: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:25:02.771: INFO: Number of nodes with available pods: 1 Mar 13 00:25:02.771: INFO: Node latest-worker is running more than one daemon pod Mar 13 00:25:03.775: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:25:03.777: INFO: Number of nodes with available pods: 1 Mar 13 00:25:03.777: INFO: Node latest-worker is running more than one daemon pod Mar 13 00:25:04.776: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:25:04.782: INFO: Number of nodes with available pods: 2 Mar 13 00:25:04.782: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6370, will wait for the garbage collector to delete the pods Mar 13 00:25:04.853: INFO: Deleting DaemonSet.extensions daemon-set took: 5.387021ms Mar 13 00:25:05.153: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.276573ms Mar 13 00:25:12.556: INFO: Number of nodes with available pods: 0 Mar 13 00:25:12.556: INFO: Number of running nodes: 0, number of available pods: 0 Mar 13 00:25:12.559: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6370/daemonsets","resourceVersion":"1224551"},"items":null} Mar 13 00:25:12.561: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6370/pods","resourceVersion":"1224551"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:25:12.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6370" for this suite. 
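The rolling update driven above amounts to changing the pod-template image on a DaemonSet whose updateStrategy is RollingUpdate. A minimal sketch; the DaemonSet name, label, namespace, and both images come from the log, while the container name "app" is an assumption:

  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: daemon-set
  spec:
    selector:
      matchLabels:
        name: daemon-set
    updateStrategy:
      type: RollingUpdate                 # old pods are replaced node by node, as the log shows
    template:
      metadata:
        labels:
          name: daemon-set
      spec:
        containers:
        - name: app                       # hypothetical container name
          image: docker.io/library/httpd:2.4.38-alpine

  # The image change that triggers the rollout seen above:
  kubectl set image daemonset/daemon-set app=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 -n daemonsets-6370
  kubectl rollout status daemonset/daemon-set -n daemonsets-6370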
• [SLOW TEST:33.007 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":275,"completed":190,"skipped":3209,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:25:12.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-4047, will wait for the garbage collector to delete the pods Mar 13 00:25:16.721: INFO: Deleting Job.batch foo took: 3.154422ms Mar 13 00:25:16.821: INFO: Terminating Job.batch foo pods took: 100.158134ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:25:52.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4047" for this suite. 
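Deleting a Job cascades to its pods in the same way; a sketch using the job name from the log but a placeholder image and command (the test's parallel pod spec is not shown):

  kubectl create job foo --image=docker.io/library/busybox -- sleep 300
  kubectl get pods -l job-name=foo       # active pods should equal the job's parallelism
  kubectl delete job foo                 # the garbage collector then terminates the pods
  kubectl get job foo                    # eventually returns NotFound, as the test asserts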
• [SLOW TEST:39.954 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":191,"skipped":3231,"failed":0} SSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:25:52.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 13 00:25:54.623: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:25:54.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1632" for this suite. 
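The behavior verified here is controlled by two container fields. A minimal pod sketch under assumed names; writing "OK" to the termination-message file on a successful exit reproduces the Expected: &{OK} match above:

  apiVersion: v1
  kind: Pod
  metadata:
    name: termination-demo               # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: main
      image: docker.io/library/busybox
      command: ["/bin/sh", "-c", "echo -n OK > /dev/termination-log"]
      terminationMessagePath: /dev/termination-log          # the file the kubelet reads on container exit
      terminationMessagePolicy: FallbackToLogsOnError       # logs are used only if the file is empty and the container failed

  kubectl get pod termination-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'   # prints OK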
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":192,"skipped":3234,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:25:54.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 13 00:25:54.784: INFO: Waiting up to 5m0s for pod "downwardapi-volume-48caa9d3-e3b9-45d0-944a-7d8eab4abbc9" in namespace "projected-7448" to be "Succeeded or Failed" Mar 13 00:25:54.814: INFO: Pod "downwardapi-volume-48caa9d3-e3b9-45d0-944a-7d8eab4abbc9": Phase="Pending", Reason="", readiness=false. Elapsed: 30.181674ms Mar 13 00:25:56.818: INFO: Pod "downwardapi-volume-48caa9d3-e3b9-45d0-944a-7d8eab4abbc9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.034061915s STEP: Saw pod success Mar 13 00:25:56.818: INFO: Pod "downwardapi-volume-48caa9d3-e3b9-45d0-944a-7d8eab4abbc9" satisfied condition "Succeeded or Failed" Mar 13 00:25:56.821: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-48caa9d3-e3b9-45d0-944a-7d8eab4abbc9 container client-container: STEP: delete the pod Mar 13 00:25:56.901: INFO: Waiting for pod downwardapi-volume-48caa9d3-e3b9-45d0-944a-7d8eab4abbc9 to disappear Mar 13 00:25:56.903: INFO: Pod downwardapi-volume-48caa9d3-e3b9-45d0-944a-7d8eab4abbc9 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:25:56.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7448" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":193,"skipped":3237,"failed":0} ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:25:56.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:25:57.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-116" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":275,"completed":194,"skipped":3237,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:25:57.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 13 00:25:57.089: INFO: Waiting up to 5m0s for pod "pod-ca9ed70e-feb5-43f5-b10e-33982b47c38e" in namespace "emptydir-4783" to be "Succeeded or Failed" Mar 13 00:25:57.093: INFO: Pod "pod-ca9ed70e-feb5-43f5-b10e-33982b47c38e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.282265ms Mar 13 00:25:59.097: INFO: Pod "pod-ca9ed70e-feb5-43f5-b10e-33982b47c38e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008060298s STEP: Saw pod success Mar 13 00:25:59.097: INFO: Pod "pod-ca9ed70e-feb5-43f5-b10e-33982b47c38e" satisfied condition "Succeeded or Failed" Mar 13 00:25:59.100: INFO: Trying to get logs from node latest-worker pod pod-ca9ed70e-feb5-43f5-b10e-33982b47c38e container test-container: STEP: delete the pod Mar 13 00:25:59.137: INFO: Waiting for pod pod-ca9ed70e-feb5-43f5-b10e-33982b47c38e to disappear Mar 13 00:25:59.147: INFO: Pod pod-ca9ed70e-feb5-43f5-b10e-33982b47c38e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:25:59.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4783" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":195,"skipped":3255,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:25:59.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 13 00:25:59.772: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 13 00:26:02.806: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 13 00:26:02.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5345-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:26:03.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2295" for this suite. STEP: Destroying namespace "webhook-2295-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":275,"completed":196,"skipped":3263,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:26:04.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-d7a1ceda-7a71-43cd-ab0d-a0e2d1c86f84 STEP: Creating a pod to test consume secrets Mar 13 00:26:04.108: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1f4244d8-d097-4d97-8413-2b663564b620" in namespace "projected-6970" to be "Succeeded or Failed" Mar 13 00:26:04.140: INFO: Pod "pod-projected-secrets-1f4244d8-d097-4d97-8413-2b663564b620": Phase="Pending", Reason="", readiness=false. Elapsed: 31.782493ms Mar 13 00:26:06.143: INFO: Pod "pod-projected-secrets-1f4244d8-d097-4d97-8413-2b663564b620": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.035429006s STEP: Saw pod success Mar 13 00:26:06.143: INFO: Pod "pod-projected-secrets-1f4244d8-d097-4d97-8413-2b663564b620" satisfied condition "Succeeded or Failed" Mar 13 00:26:06.146: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-1f4244d8-d097-4d97-8413-2b663564b620 container projected-secret-volume-test: STEP: delete the pod Mar 13 00:26:06.183: INFO: Waiting for pod pod-projected-secrets-1f4244d8-d097-4d97-8413-2b663564b620 to disappear Mar 13 00:26:06.190: INFO: Pod pod-projected-secrets-1f4244d8-d097-4d97-8413-2b663564b620 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:26:06.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6970" for this suite. 
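"Mappings and Item Mode" means each secret key is remapped to a chosen file path with an explicit mode. A sketch of the volume fragment; the secret name matches the log, while the key, path, and mode are illustrative:

  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map-d7a1ceda-7a71-43cd-ab0d-a0e2d1c86f84
          items:
          - key: data-1
            path: new-path-data-1
            mode: 0400        # per-item mode overrides the volume-level defaultMode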
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":197,"skipped":3271,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:26:06.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:26:12.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8816" for this suite. STEP: Destroying namespace "nsdeletetest-1400" for this suite. Mar 13 00:26:12.445: INFO: Namespace nsdeletetest-1400 was already deleted STEP: Destroying namespace "nsdeletetest-8707" for this suite. 
• [SLOW TEST:6.251 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":275,"completed":198,"skipped":3283,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:26:12.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Mar 13 00:26:15.063: INFO: Successfully updated pod "annotationupdate1b7cad0b-919e-48b3-be7f-5dd79ebd27ad" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:26:17.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-122" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":199,"skipped":3286,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:26:17.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-d3fb12c8-6199-40dc-acd7-15c10aa277cc STEP: Creating a pod to test consume configMaps Mar 13 00:26:17.142: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5a34bcc0-62eb-44d6-9128-76d491145694" in namespace "projected-8003" to be "Succeeded or Failed" Mar 13 00:26:17.177: INFO: Pod "pod-projected-configmaps-5a34bcc0-62eb-44d6-9128-76d491145694": Phase="Pending", Reason="", readiness=false. 
Elapsed: 34.993992ms Mar 13 00:26:19.181: INFO: Pod "pod-projected-configmaps-5a34bcc0-62eb-44d6-9128-76d491145694": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.039057749s STEP: Saw pod success Mar 13 00:26:19.181: INFO: Pod "pod-projected-configmaps-5a34bcc0-62eb-44d6-9128-76d491145694" satisfied condition "Succeeded or Failed" Mar 13 00:26:19.184: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-5a34bcc0-62eb-44d6-9128-76d491145694 container projected-configmap-volume-test: STEP: delete the pod Mar 13 00:26:19.207: INFO: Waiting for pod pod-projected-configmaps-5a34bcc0-62eb-44d6-9128-76d491145694 to disappear Mar 13 00:26:19.210: INFO: Pod pod-projected-configmaps-5a34bcc0-62eb-44d6-9128-76d491145694 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:26:19.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8003" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":200,"skipped":3311,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:26:19.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 13 00:26:19.834: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 13 00:26:21.843: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719655979, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719655979, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719655979, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719655979, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 13 00:26:24.862: INFO: Waiting for amount of 
service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 13 00:26:24.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:26:26.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3671" for this suite. STEP: Destroying namespace "webhook-3671-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.955 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":275,"completed":201,"skipped":3318,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:26:26.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Mar 13 00:26:26.286: INFO: Pod name pod-release: Found 0 pods out of 1 Mar 13 00:26:31.295: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:26:31.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1024" for this suite. 
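"Releasing" a pod means its labels no longer match the controller's selector, so the ReplicationController drops its controller ownerReference and creates a replacement. Sketched with kubectl; <pod-name> is a placeholder for the pod the test relabeled:

  kubectl get pods -l name=pod-release                                   # the RC's one managed pod
  kubectl label pod <pod-name> name=pod-released --overwrite             # pod stops matching the selector
  kubectl get pod <pod-name> -o jsonpath='{.metadata.ownerReferences}'   # controller reference is gone
  kubectl get pods -l name=pod-release                                   # the RC has spun up a replacement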
• [SLOW TEST:5.255 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":275,"completed":202,"skipped":3356,"failed":0} SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:26:31.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-8efd6586-6ba8-47c9-9b8a-6aca46f7445a STEP: Creating a pod to test consume configMaps Mar 13 00:26:31.710: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b39cbb21-b13b-45be-9b2e-c9132b2ef220" in namespace "projected-8603" to be "Succeeded or Failed" Mar 13 00:26:31.786: INFO: Pod "pod-projected-configmaps-b39cbb21-b13b-45be-9b2e-c9132b2ef220": Phase="Pending", Reason="", readiness=false. Elapsed: 76.052393ms Mar 13 00:26:33.789: INFO: Pod "pod-projected-configmaps-b39cbb21-b13b-45be-9b2e-c9132b2ef220": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.078497133s STEP: Saw pod success Mar 13 00:26:33.789: INFO: Pod "pod-projected-configmaps-b39cbb21-b13b-45be-9b2e-c9132b2ef220" satisfied condition "Succeeded or Failed" Mar 13 00:26:33.790: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-b39cbb21-b13b-45be-9b2e-c9132b2ef220 container projected-configmap-volume-test: STEP: delete the pod Mar 13 00:26:33.810: INFO: Waiting for pod pod-projected-configmaps-b39cbb21-b13b-45be-9b2e-c9132b2ef220 to disappear Mar 13 00:26:33.816: INFO: Pod pod-projected-configmaps-b39cbb21-b13b-45be-9b2e-c9132b2ef220 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:26:33.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8603" for this suite. 
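The non-root variant runs the consuming container under a non-zero UID and still expects the projected configMap file to be readable. Relevant fragment; the configMap name matches the log, the UID and key/path mapping are illustrative:

  spec:
    securityContext:
      runAsUser: 1000          # non-root; projected files must remain readable at this UID
    volumes:
    - name: projected-configmap-volume
      projected:
        sources:
        - configMap:
            name: projected-configmap-test-volume-map-8efd6586-6ba8-47c9-9b8a-6aca46f7445a
            items:
            - key: data-1
              path: path/to/data-1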
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":203,"skipped":3358,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:26:33.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:26:33.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8831" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":275,"completed":204,"skipped":3388,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:26:33.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1418 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 13 00:26:33.952: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1448' Mar 13 00:26:35.850: INFO: stderr: "" Mar 13 00:26:35.850: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1423 Mar 13 00:26:35.856: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-1448' Mar 13 
00:26:52.495: INFO: stderr: "" Mar 13 00:26:52.495: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:26:52.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1448" for this suite. • [SLOW TEST:18.598 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1414 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":275,"completed":205,"skipped":3435,"failed":0} S ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:26:52.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-6891 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-6891 I0313 00:26:52.620957 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-6891, replica count: 2 I0313 00:26:55.671443 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 13 00:26:55.671: INFO: Creating new exec pod Mar 13 00:26:58.737: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-6891 execpod6dbkq -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 13 00:26:58.947: INFO: stderr: "I0313 00:26:58.877536 2516 log.go:172] (0xc0003c78c0) (0xc00069b4a0) Create stream\nI0313 00:26:58.877582 2516 log.go:172] (0xc0003c78c0) (0xc00069b4a0) Stream added, broadcasting: 1\nI0313 00:26:58.883622 2516 log.go:172] (0xc0003c78c0) Reply frame received for 1\nI0313 00:26:58.883662 2516 log.go:172] (0xc0003c78c0) (0xc00090a000) Create stream\nI0313 00:26:58.883671 2516 log.go:172] (0xc0003c78c0) (0xc00090a000) Stream added, broadcasting: 3\nI0313 00:26:58.886805 2516 log.go:172] (0xc0003c78c0) Reply frame received for 3\nI0313 00:26:58.886834 2516 log.go:172] (0xc0003c78c0) (0xc00069b540) Create stream\nI0313 00:26:58.886842 2516 log.go:172] (0xc0003c78c0) (0xc00069b540) Stream added, broadcasting: 5\nI0313 00:26:58.887825 2516 
log.go:172] (0xc0003c78c0) Reply frame received for 5\nI0313 00:26:58.942613 2516 log.go:172] (0xc0003c78c0) Data frame received for 3\nI0313 00:26:58.942647 2516 log.go:172] (0xc00090a000) (3) Data frame handling\nI0313 00:26:58.942669 2516 log.go:172] (0xc0003c78c0) Data frame received for 5\nI0313 00:26:58.942678 2516 log.go:172] (0xc00069b540) (5) Data frame handling\nI0313 00:26:58.942690 2516 log.go:172] (0xc00069b540) (5) Data frame sent\nI0313 00:26:58.942699 2516 log.go:172] (0xc0003c78c0) Data frame received for 5\nI0313 00:26:58.942705 2516 log.go:172] (0xc00069b540) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0313 00:26:58.944107 2516 log.go:172] (0xc0003c78c0) Data frame received for 1\nI0313 00:26:58.944120 2516 log.go:172] (0xc00069b4a0) (1) Data frame handling\nI0313 00:26:58.944133 2516 log.go:172] (0xc00069b4a0) (1) Data frame sent\nI0313 00:26:58.944220 2516 log.go:172] (0xc0003c78c0) (0xc00069b4a0) Stream removed, broadcasting: 1\nI0313 00:26:58.944252 2516 log.go:172] (0xc0003c78c0) Go away received\nI0313 00:26:58.944638 2516 log.go:172] (0xc0003c78c0) (0xc00069b4a0) Stream removed, broadcasting: 1\nI0313 00:26:58.944657 2516 log.go:172] (0xc0003c78c0) (0xc00090a000) Stream removed, broadcasting: 3\nI0313 00:26:58.944665 2516 log.go:172] (0xc0003c78c0) (0xc00069b540) Stream removed, broadcasting: 5\n" Mar 13 00:26:58.947: INFO: stdout: "" Mar 13 00:26:58.948: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-6891 execpod6dbkq -- /bin/sh -x -c nc -zv -t -w 2 10.96.78.53 80' Mar 13 00:26:59.117: INFO: stderr: "I0313 00:26:59.050754 2539 log.go:172] (0xc000aed600) (0xc000a10820) Create stream\nI0313 00:26:59.050792 2539 log.go:172] (0xc000aed600) (0xc000a10820) Stream added, broadcasting: 1\nI0313 00:26:59.053954 2539 log.go:172] (0xc000aed600) Reply frame received for 1\nI0313 00:26:59.053982 2539 log.go:172] (0xc000aed600) (0xc00060f680) Create stream\nI0313 00:26:59.053988 2539 log.go:172] (0xc000aed600) (0xc00060f680) Stream added, broadcasting: 3\nI0313 00:26:59.054766 2539 log.go:172] (0xc000aed600) Reply frame received for 3\nI0313 00:26:59.054813 2539 log.go:172] (0xc000aed600) (0xc0004f4aa0) Create stream\nI0313 00:26:59.054826 2539 log.go:172] (0xc000aed600) (0xc0004f4aa0) Stream added, broadcasting: 5\nI0313 00:26:59.055651 2539 log.go:172] (0xc000aed600) Reply frame received for 5\nI0313 00:26:59.107501 2539 log.go:172] (0xc000aed600) Data frame received for 5\nI0313 00:26:59.107535 2539 log.go:172] (0xc0004f4aa0) (5) Data frame handling\nI0313 00:26:59.107562 2539 log.go:172] (0xc0004f4aa0) (5) Data frame sent\nI0313 00:26:59.107580 2539 log.go:172] (0xc000aed600) Data frame received for 5\nI0313 00:26:59.107592 2539 log.go:172] (0xc0004f4aa0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.78.53 80\nConnection to 10.96.78.53 80 port [tcp/http] succeeded!\nI0313 00:26:59.107869 2539 log.go:172] (0xc000aed600) Data frame received for 3\nI0313 00:26:59.107886 2539 log.go:172] (0xc00060f680) (3) Data frame handling\nI0313 00:26:59.109318 2539 log.go:172] (0xc000aed600) Data frame received for 1\nI0313 00:26:59.109332 2539 log.go:172] (0xc000a10820) (1) Data frame handling\nI0313 00:26:59.109342 2539 log.go:172] (0xc000a10820) (1) Data frame sent\nI0313 00:26:59.109354 2539 log.go:172] (0xc000aed600) (0xc000a10820) Stream removed, broadcasting: 1\nI0313 00:26:59.109447 2539 log.go:172] 
(0xc000aed600) Go away received\nI0313 00:26:59.109632 2539 log.go:172] (0xc000aed600) (0xc000a10820) Stream removed, broadcasting: 1\nI0313 00:26:59.109646 2539 log.go:172] (0xc000aed600) (0xc00060f680) Stream removed, broadcasting: 3\nI0313 00:26:59.109653 2539 log.go:172] (0xc000aed600) (0xc0004f4aa0) Stream removed, broadcasting: 5\n" Mar 13 00:26:59.117: INFO: stdout: "" Mar 13 00:26:59.117: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:26:59.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6891" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:6.695 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":275,"completed":206,"skipped":3436,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:26:59.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a replication controller Mar 13 00:26:59.286: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-731' Mar 13 00:26:59.534: INFO: stderr: "" Mar 13 00:26:59.534: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 13 00:26:59.534: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-731' Mar 13 00:26:59.625: INFO: stderr: "" Mar 13 00:26:59.625: INFO: stdout: "update-demo-nautilus-57fh4 update-demo-nautilus-9w45k " Mar 13 00:26:59.625: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-57fh4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-731' Mar 13 00:26:59.692: INFO: stderr: "" Mar 13 00:26:59.692: INFO: stdout: "" Mar 13 00:26:59.692: INFO: update-demo-nautilus-57fh4 is created but not running Mar 13 00:27:04.692: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-731' Mar 13 00:27:04.757: INFO: stderr: "" Mar 13 00:27:04.757: INFO: stdout: "update-demo-nautilus-57fh4 update-demo-nautilus-9w45k " Mar 13 00:27:04.757: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-57fh4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-731' Mar 13 00:27:04.816: INFO: stderr: "" Mar 13 00:27:04.816: INFO: stdout: "true" Mar 13 00:27:04.816: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-57fh4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-731' Mar 13 00:27:04.884: INFO: stderr: "" Mar 13 00:27:04.884: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 13 00:27:04.884: INFO: validating pod update-demo-nautilus-57fh4 Mar 13 00:27:04.887: INFO: got data: { "image": "nautilus.jpg" } Mar 13 00:27:04.887: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 13 00:27:04.887: INFO: update-demo-nautilus-57fh4 is verified up and running Mar 13 00:27:04.887: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9w45k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-731' Mar 13 00:27:04.949: INFO: stderr: "" Mar 13 00:27:04.949: INFO: stdout: "true" Mar 13 00:27:04.949: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9w45k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-731' Mar 13 00:27:05.015: INFO: stderr: "" Mar 13 00:27:05.015: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 13 00:27:05.015: INFO: validating pod update-demo-nautilus-9w45k Mar 13 00:27:05.018: INFO: got data: { "image": "nautilus.jpg" } Mar 13 00:27:05.018: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Mar 13 00:27:05.018: INFO: update-demo-nautilus-9w45k is verified up and running STEP: scaling down the replication controller Mar 13 00:27:05.020: INFO: scanned /root for discovery docs: Mar 13 00:27:05.020: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-731' Mar 13 00:27:06.099: INFO: stderr: "" Mar 13 00:27:06.099: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 13 00:27:06.099: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-731' Mar 13 00:27:06.212: INFO: stderr: "" Mar 13 00:27:06.212: INFO: stdout: "update-demo-nautilus-57fh4 update-demo-nautilus-9w45k " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 13 00:27:11.212: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-731' Mar 13 00:27:11.329: INFO: stderr: "" Mar 13 00:27:11.329: INFO: stdout: "update-demo-nautilus-57fh4 update-demo-nautilus-9w45k " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 13 00:27:16.329: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-731' Mar 13 00:27:16.409: INFO: stderr: "" Mar 13 00:27:16.409: INFO: stdout: "update-demo-nautilus-9w45k " Mar 13 00:27:16.409: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9w45k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-731' Mar 13 00:27:16.487: INFO: stderr: "" Mar 13 00:27:16.487: INFO: stdout: "true" Mar 13 00:27:16.487: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9w45k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-731' Mar 13 00:27:16.555: INFO: stderr: "" Mar 13 00:27:16.555: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 13 00:27:16.555: INFO: validating pod update-demo-nautilus-9w45k Mar 13 00:27:16.558: INFO: got data: { "image": "nautilus.jpg" } Mar 13 00:27:16.558: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 13 00:27:16.558: INFO: update-demo-nautilus-9w45k is verified up and running STEP: scaling up the replication controller Mar 13 00:27:16.560: INFO: scanned /root for discovery docs: Mar 13 00:27:16.560: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-731' Mar 13 00:27:17.693: INFO: stderr: "" Mar 13 00:27:17.693: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Mar 13 00:27:17.693: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-731' Mar 13 00:27:17.774: INFO: stderr: "" Mar 13 00:27:17.774: INFO: stdout: "update-demo-nautilus-9mj6q update-demo-nautilus-9w45k " Mar 13 00:27:17.774: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9mj6q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-731' Mar 13 00:27:17.856: INFO: stderr: "" Mar 13 00:27:17.856: INFO: stdout: "" Mar 13 00:27:17.856: INFO: update-demo-nautilus-9mj6q is created but not running Mar 13 00:27:22.856: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-731' Mar 13 00:27:22.956: INFO: stderr: "" Mar 13 00:27:22.956: INFO: stdout: "update-demo-nautilus-9mj6q update-demo-nautilus-9w45k " Mar 13 00:27:22.956: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9mj6q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-731' Mar 13 00:27:23.045: INFO: stderr: "" Mar 13 00:27:23.045: INFO: stdout: "true" Mar 13 00:27:23.045: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9mj6q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-731' Mar 13 00:27:23.129: INFO: stderr: "" Mar 13 00:27:23.129: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 13 00:27:23.129: INFO: validating pod update-demo-nautilus-9mj6q Mar 13 00:27:23.131: INFO: got data: { "image": "nautilus.jpg" } Mar 13 00:27:23.131: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 13 00:27:23.131: INFO: update-demo-nautilus-9mj6q is verified up and running Mar 13 00:27:23.131: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9w45k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-731' Mar 13 00:27:23.202: INFO: stderr: "" Mar 13 00:27:23.202: INFO: stdout: "true" Mar 13 00:27:23.203: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9w45k -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-731' Mar 13 00:27:23.275: INFO: stderr: "" Mar 13 00:27:23.275: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 13 00:27:23.275: INFO: validating pod update-demo-nautilus-9w45k Mar 13 00:27:23.277: INFO: got data: { "image": "nautilus.jpg" } Mar 13 00:27:23.277: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 13 00:27:23.277: INFO: update-demo-nautilus-9w45k is verified up and running STEP: using delete to clean up resources Mar 13 00:27:23.277: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-731' Mar 13 00:27:23.346: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 13 00:27:23.346: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 13 00:27:23.346: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-731' Mar 13 00:27:23.415: INFO: stderr: "No resources found in kubectl-731 namespace.\n" Mar 13 00:27:23.415: INFO: stdout: "" Mar 13 00:27:23.415: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-731 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 13 00:27:23.480: INFO: stderr: "" Mar 13 00:27:23.480: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:27:23.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-731" for this suite. 
• [SLOW TEST:24.286 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":275,"completed":207,"skipped":3501,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:27:23.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-479826fb-abed-46a0-adba-eb85eef9280a STEP: Creating a pod to test consume configMaps Mar 13 00:27:23.543: INFO: Waiting up to 5m0s for pod "pod-configmaps-9acb1cde-f072-4c68-86b1-cbb5f39b30a1" in namespace "configmap-4442" to be "Succeeded or Failed" Mar 13 00:27:23.590: INFO: Pod "pod-configmaps-9acb1cde-f072-4c68-86b1-cbb5f39b30a1": Phase="Pending", Reason="", readiness=false. Elapsed: 47.119971ms Mar 13 00:27:25.594: INFO: Pod "pod-configmaps-9acb1cde-f072-4c68-86b1-cbb5f39b30a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051024509s Mar 13 00:27:27.598: INFO: Pod "pod-configmaps-9acb1cde-f072-4c68-86b1-cbb5f39b30a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054705939s STEP: Saw pod success Mar 13 00:27:27.598: INFO: Pod "pod-configmaps-9acb1cde-f072-4c68-86b1-cbb5f39b30a1" satisfied condition "Succeeded or Failed" Mar 13 00:27:27.600: INFO: Trying to get logs from node latest-worker pod pod-configmaps-9acb1cde-f072-4c68-86b1-cbb5f39b30a1 container configmap-volume-test: STEP: delete the pod Mar 13 00:27:27.642: INFO: Waiting for pod pod-configmaps-9acb1cde-f072-4c68-86b1-cbb5f39b30a1 to disappear Mar 13 00:27:27.650: INFO: Pod pod-configmaps-9acb1cde-f072-4c68-86b1-cbb5f39b30a1 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:27:27.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4442" for this suite. 
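The ConfigMap test that just passed mounts one ConfigMap at two paths inside a single pod. A hedged equivalent of what the suite builds programmatically (all names here are illustrative, not the generated ones):

    kubectl create configmap shared-config --from-literal=data-1=value-1
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: configmap-two-volumes
    spec:
      restartPolicy: Never
      containers:
      - name: configmap-volume-test
        image: busybox
        command: ["sh", "-c", "cat /etc/cm-1/data-1 /etc/cm-2/data-1"]
        volumeMounts:
        - name: cm-vol-1
          mountPath: /etc/cm-1
        - name: cm-vol-2
          mountPath: /etc/cm-2
      volumes:
      - name: cm-vol-1
        configMap:
          name: shared-config
      - name: cm-vol-2
        configMap:
          name: shared-config
    EOF

Both mounts project the same data; the pod succeeds once cat prints value-1 twice.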
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":208,"skipped":3553,"failed":0} SSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:27:27.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override all Mar 13 00:27:27.772: INFO: Waiting up to 5m0s for pod "client-containers-ca4b3eed-aac3-4d80-91b6-dc7c5125eef4" in namespace "containers-3218" to be "Succeeded or Failed" Mar 13 00:27:27.779: INFO: Pod "client-containers-ca4b3eed-aac3-4d80-91b6-dc7c5125eef4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.855755ms Mar 13 00:27:29.783: INFO: Pod "client-containers-ca4b3eed-aac3-4d80-91b6-dc7c5125eef4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010666955s STEP: Saw pod success Mar 13 00:27:29.783: INFO: Pod "client-containers-ca4b3eed-aac3-4d80-91b6-dc7c5125eef4" satisfied condition "Succeeded or Failed" Mar 13 00:27:29.785: INFO: Trying to get logs from node latest-worker pod client-containers-ca4b3eed-aac3-4d80-91b6-dc7c5125eef4 container test-container: STEP: delete the pod Mar 13 00:27:29.837: INFO: Waiting for pod client-containers-ca4b3eed-aac3-4d80-91b6-dc7c5125eef4 to disappear Mar 13 00:27:29.839: INFO: Pod client-containers-ca4b3eed-aac3-4d80-91b6-dc7c5125eef4 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:27:29.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3218" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":275,"completed":209,"skipped":3559,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:27:29.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Mar 13 00:27:29.901: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1802 /api/v1/namespaces/watch-1802/configmaps/e2e-watch-test-watch-closed 507957ff-ba04-49ed-a2ee-5932c7430a31 1225671 0 2020-03-13 00:27:29 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 13 00:27:29.901: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1802 /api/v1/namespaces/watch-1802/configmaps/e2e-watch-test-watch-closed 507957ff-ba04-49ed-a2ee-5932c7430a31 1225672 0 2020-03-13 00:27:29 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Mar 13 00:27:29.933: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1802 /api/v1/namespaces/watch-1802/configmaps/e2e-watch-test-watch-closed 507957ff-ba04-49ed-a2ee-5932c7430a31 1225673 0 2020-03-13 00:27:29 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 13 00:27:29.933: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1802 /api/v1/namespaces/watch-1802/configmaps/e2e-watch-test-watch-closed 507957ff-ba04-49ed-a2ee-5932c7430a31 1225674 0 2020-03-13 00:27:29 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:27:29.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1802" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":210,"skipped":3593,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:27:29.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-dfb56268-597e-43b8-9e73-5778573ef33e STEP: Creating a pod to test consume configMaps Mar 13 00:27:30.037: INFO: Waiting up to 5m0s for pod "pod-configmaps-85d0beba-2ea9-4a3a-a614-d1f339e15ad8" in namespace "configmap-26" to be "Succeeded or Failed" Mar 13 00:27:30.043: INFO: Pod "pod-configmaps-85d0beba-2ea9-4a3a-a614-d1f339e15ad8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.307996ms Mar 13 00:27:32.081: INFO: Pod "pod-configmaps-85d0beba-2ea9-4a3a-a614-d1f339e15ad8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.044393361s STEP: Saw pod success Mar 13 00:27:32.082: INFO: Pod "pod-configmaps-85d0beba-2ea9-4a3a-a614-d1f339e15ad8" satisfied condition "Succeeded or Failed" Mar 13 00:27:32.084: INFO: Trying to get logs from node latest-worker pod pod-configmaps-85d0beba-2ea9-4a3a-a614-d1f339e15ad8 container configmap-volume-test: STEP: delete the pod Mar 13 00:27:32.207: INFO: Waiting for pod pod-configmaps-85d0beba-2ea9-4a3a-a614-d1f339e15ad8 to disappear Mar 13 00:27:32.229: INFO: Pod pod-configmaps-85d0beba-2ea9-4a3a-a614-d1f339e15ad8 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:27:32.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-26" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":211,"skipped":3605,"failed":0} ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:27:32.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 13 00:27:32.361: INFO: Waiting up to 5m0s for pod "downwardapi-volume-12f92e92-1c0b-401c-9aa7-544f9dc65927" in namespace "projected-4431" to be "Succeeded or Failed" Mar 13 00:27:32.366: INFO: Pod "downwardapi-volume-12f92e92-1c0b-401c-9aa7-544f9dc65927": Phase="Pending", Reason="", readiness=false. Elapsed: 4.962871ms Mar 13 00:27:34.369: INFO: Pod "downwardapi-volume-12f92e92-1c0b-401c-9aa7-544f9dc65927": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007560704s Mar 13 00:27:36.372: INFO: Pod "downwardapi-volume-12f92e92-1c0b-401c-9aa7-544f9dc65927": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010799801s STEP: Saw pod success Mar 13 00:27:36.372: INFO: Pod "downwardapi-volume-12f92e92-1c0b-401c-9aa7-544f9dc65927" satisfied condition "Succeeded or Failed" Mar 13 00:27:36.374: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-12f92e92-1c0b-401c-9aa7-544f9dc65927 container client-container: STEP: delete the pod Mar 13 00:27:36.403: INFO: Waiting for pod downwardapi-volume-12f92e92-1c0b-401c-9aa7-544f9dc65927 to disappear Mar 13 00:27:36.411: INFO: Pod downwardapi-volume-12f92e92-1c0b-401c-9aa7-544f9dc65927 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:27:36.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4431" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":212,"skipped":3605,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:27:36.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-579 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-579 STEP: creating replication controller externalsvc in namespace services-579 I0313 00:27:36.603199 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-579, replica count: 2 I0313 00:27:39.653730 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Mar 13 00:27:39.718: INFO: Creating new exec pod Mar 13 00:27:41.742: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-579 execpodwl5qp -- /bin/sh -x -c nslookup clusterip-service' Mar 13 00:27:41.948: INFO: stderr: "I0313 00:27:41.867954 3070 log.go:172] (0xc000b61550) (0xc0009a0a00) Create stream\nI0313 00:27:41.867995 3070 log.go:172] (0xc000b61550) (0xc0009a0a00) Stream added, broadcasting: 1\nI0313 00:27:41.871614 3070 log.go:172] (0xc000b61550) Reply frame received for 1\nI0313 00:27:41.871649 3070 log.go:172] (0xc000b61550) (0xc00065d680) Create stream\nI0313 00:27:41.871657 3070 log.go:172] (0xc000b61550) (0xc00065d680) Stream added, broadcasting: 3\nI0313 00:27:41.872300 3070 log.go:172] (0xc000b61550) Reply frame received for 3\nI0313 00:27:41.872326 3070 log.go:172] (0xc000b61550) (0xc00055caa0) Create stream\nI0313 00:27:41.872333 3070 log.go:172] (0xc000b61550) (0xc00055caa0) Stream added, broadcasting: 5\nI0313 00:27:41.872960 3070 log.go:172] (0xc000b61550) Reply frame received for 5\nI0313 00:27:41.936330 3070 log.go:172] (0xc000b61550) Data frame received for 5\nI0313 00:27:41.936353 3070 log.go:172] (0xc00055caa0) (5) Data frame handling\nI0313 00:27:41.936367 3070 log.go:172] (0xc00055caa0) (5) Data frame sent\n+ nslookup clusterip-service\nI0313 00:27:41.941302 3070 log.go:172] (0xc000b61550) Data frame received for 3\nI0313 00:27:41.941320 3070 log.go:172] (0xc00065d680) (3) Data frame handling\nI0313 00:27:41.941334 3070 log.go:172] (0xc00065d680) (3) Data frame sent\nI0313 00:27:41.942723 3070 log.go:172] (0xc000b61550) Data frame received for 3\nI0313 
00:27:41.942739 3070 log.go:172] (0xc00065d680) (3) Data frame handling\nI0313 00:27:41.942752 3070 log.go:172] (0xc00065d680) (3) Data frame sent\nI0313 00:27:41.943169 3070 log.go:172] (0xc000b61550) Data frame received for 5\nI0313 00:27:41.943199 3070 log.go:172] (0xc000b61550) Data frame received for 3\nI0313 00:27:41.943216 3070 log.go:172] (0xc00065d680) (3) Data frame handling\nI0313 00:27:41.943236 3070 log.go:172] (0xc00055caa0) (5) Data frame handling\nI0313 00:27:41.945150 3070 log.go:172] (0xc000b61550) Data frame received for 1\nI0313 00:27:41.945167 3070 log.go:172] (0xc0009a0a00) (1) Data frame handling\nI0313 00:27:41.945176 3070 log.go:172] (0xc0009a0a00) (1) Data frame sent\nI0313 00:27:41.945188 3070 log.go:172] (0xc000b61550) (0xc0009a0a00) Stream removed, broadcasting: 1\nI0313 00:27:41.945202 3070 log.go:172] (0xc000b61550) Go away received\nI0313 00:27:41.945522 3070 log.go:172] (0xc000b61550) (0xc0009a0a00) Stream removed, broadcasting: 1\nI0313 00:27:41.945543 3070 log.go:172] (0xc000b61550) (0xc00065d680) Stream removed, broadcasting: 3\nI0313 00:27:41.945549 3070 log.go:172] (0xc000b61550) (0xc00055caa0) Stream removed, broadcasting: 5\n" Mar 13 00:27:41.948: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-579.svc.cluster.local\tcanonical name = externalsvc.services-579.svc.cluster.local.\nName:\texternalsvc.services-579.svc.cluster.local\nAddress: 10.96.106.135\n\n" STEP: deleting ReplicationController externalsvc in namespace services-579, will wait for the garbage collector to delete the pods Mar 13 00:27:42.006: INFO: Deleting ReplicationController externalsvc took: 4.948839ms Mar 13 00:27:42.306: INFO: Terminating ReplicationController externalsvc pods took: 300.225695ms Mar 13 00:27:52.549: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:27:52.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-579" for this suite. 
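Flipping a Service between ClusterIP and ExternalName is an update to spec.type plus clearing the fields that no longer apply, which is what the framework does here before checking that the name resolves as a CNAME. A hedged hand-rolled version (the patch shape is an assumption; the names are this run's, kept for readability):

    kubectl patch service clusterip-service -n services-579 --type=merge -p '{
      "spec": {
        "type": "ExternalName",
        "externalName": "externalsvc.services-579.svc.cluster.local",
        "clusterIP": "",
        "selector": null
      }
    }'
    # verify from inside the cluster, as the suite does with its exec pod:
    kubectl run dns-probe -n services-579 --rm -i --restart=Never --image=busybox -- \
      nslookup clusterip-service
    # expect a canonical-name answer pointing at externalsvc..., matching the stdout above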
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:16.200 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":275,"completed":213,"skipped":3631,"failed":0} SSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:27:52.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 13 00:27:55.733: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:27:55.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5569" for this suite. 
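FallbackToLogsOnError only consults container logs when the container fails; on success the recorded message is whatever was written to terminationMessagePath, here nothing, hence the expected empty {} above. A sketch with illustrative names:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: termination-demo
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox
        command: ["sh", "-c", "exit 0"]   # succeeds without writing a message
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: FallbackToLogsOnError
    EOF
    kubectl get pod termination-demo \
      -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
    # empty output once the pod has succeeded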
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":214,"skipped":3638,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:27:55.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:27:55.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1350" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":275,"completed":215,"skipped":3678,"failed":0} SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:27:55.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Mar 13 00:27:56.845: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:27:56.849: INFO: Number of nodes with available pods: 0 Mar 13 00:27:56.849: INFO: Node latest-worker is running more than one daemon pod Mar 13 00:27:57.867: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:27:57.921: INFO: Number of nodes with available pods: 0 Mar 13 00:27:57.922: INFO: Node latest-worker is running more than one daemon pod Mar 13 00:27:58.855: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:27:58.858: INFO: Number of nodes with available pods: 2 Mar 13 00:27:58.858: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Mar 13 00:27:58.872: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:27:58.889: INFO: Number of nodes with available pods: 1 Mar 13 00:27:58.889: INFO: Node latest-worker2 is running more than one daemon pod Mar 13 00:27:59.893: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:27:59.896: INFO: Number of nodes with available pods: 1 Mar 13 00:27:59.896: INFO: Node latest-worker2 is running more than one daemon pod Mar 13 00:28:00.893: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:28:00.901: INFO: Number of nodes with available pods: 2 Mar 13 00:28:00.901: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6008, will wait for the garbage collector to delete the pods Mar 13 00:28:00.993: INFO: Deleting DaemonSet.extensions daemon-set took: 12.100582ms Mar 13 00:28:01.294: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.296921ms Mar 13 00:28:04.396: INFO: Number of nodes with available pods: 0 Mar 13 00:28:04.396: INFO: Number of running nodes: 0, number of available pods: 0 Mar 13 00:28:04.398: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6008/daemonsets","resourceVersion":"1226033"},"items":null} Mar 13 00:28:04.400: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6008/pods","resourceVersion":"1226033"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:28:04.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6008" for this suite. 
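Each "DaemonSet pods can't tolerate node latest-control-plane" line above is the controller skipping the tainted control-plane node, so "every node" here means both workers. Adding a matching toleration would schedule a pod there too; a sketch (image and names are illustrative):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: daemon-demo
    spec:
      selector:
        matchLabels:
          app: daemon-demo
      template:
        metadata:
          labels:
            app: daemon-demo
        spec:
          tolerations:
          - key: node-role.kubernetes.io/master   # tolerate the control-plane taint
            effect: NoSchedule
          containers:
          - name: app
            image: k8s.gcr.io/pause:3.2
    EOF
    kubectl get pods -l app=daemon-demo -o wide   # one pod per node, control plane included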
• [SLOW TEST:8.509 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":216,"skipped":3685,"failed":0} S ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:28:04.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8113.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8113.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8113.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8113.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 13 00:28:08.526: INFO: DNS probes using dns-test-fbffd529-d012-430f-a0c0-596ff0080b66 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8113.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8113.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8113.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8113.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 13 00:28:12.620: INFO: File wheezy_udp@dns-test-service-3.dns-8113.svc.cluster.local from pod dns-8113/dns-test-8df36358-e641-452f-95a0-bc448a7d01da contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 13 00:28:12.624: INFO: File jessie_udp@dns-test-service-3.dns-8113.svc.cluster.local from pod dns-8113/dns-test-8df36358-e641-452f-95a0-bc448a7d01da contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 13 00:28:12.624: INFO: Lookups using dns-8113/dns-test-8df36358-e641-452f-95a0-bc448a7d01da failed for: [wheezy_udp@dns-test-service-3.dns-8113.svc.cluster.local jessie_udp@dns-test-service-3.dns-8113.svc.cluster.local] Mar 13 00:28:17.628: INFO: File wheezy_udp@dns-test-service-3.dns-8113.svc.cluster.local from pod dns-8113/dns-test-8df36358-e641-452f-95a0-bc448a7d01da contains 'foo.example.com. ' instead of 'bar.example.com.' 
Mar 13 00:28:17.632: INFO: File jessie_udp@dns-test-service-3.dns-8113.svc.cluster.local from pod dns-8113/dns-test-8df36358-e641-452f-95a0-bc448a7d01da contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 13 00:28:17.633: INFO: Lookups using dns-8113/dns-test-8df36358-e641-452f-95a0-bc448a7d01da failed for: [wheezy_udp@dns-test-service-3.dns-8113.svc.cluster.local jessie_udp@dns-test-service-3.dns-8113.svc.cluster.local] Mar 13 00:28:22.629: INFO: File wheezy_udp@dns-test-service-3.dns-8113.svc.cluster.local from pod dns-8113/dns-test-8df36358-e641-452f-95a0-bc448a7d01da contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 13 00:28:22.633: INFO: File jessie_udp@dns-test-service-3.dns-8113.svc.cluster.local from pod dns-8113/dns-test-8df36358-e641-452f-95a0-bc448a7d01da contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 13 00:28:22.633: INFO: Lookups using dns-8113/dns-test-8df36358-e641-452f-95a0-bc448a7d01da failed for: [wheezy_udp@dns-test-service-3.dns-8113.svc.cluster.local jessie_udp@dns-test-service-3.dns-8113.svc.cluster.local] Mar 13 00:28:27.628: INFO: File wheezy_udp@dns-test-service-3.dns-8113.svc.cluster.local from pod dns-8113/dns-test-8df36358-e641-452f-95a0-bc448a7d01da contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 13 00:28:27.631: INFO: File jessie_udp@dns-test-service-3.dns-8113.svc.cluster.local from pod dns-8113/dns-test-8df36358-e641-452f-95a0-bc448a7d01da contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 13 00:28:27.631: INFO: Lookups using dns-8113/dns-test-8df36358-e641-452f-95a0-bc448a7d01da failed for: [wheezy_udp@dns-test-service-3.dns-8113.svc.cluster.local jessie_udp@dns-test-service-3.dns-8113.svc.cluster.local] Mar 13 00:28:32.628: INFO: File wheezy_udp@dns-test-service-3.dns-8113.svc.cluster.local from pod dns-8113/dns-test-8df36358-e641-452f-95a0-bc448a7d01da contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 13 00:28:32.632: INFO: File jessie_udp@dns-test-service-3.dns-8113.svc.cluster.local from pod dns-8113/dns-test-8df36358-e641-452f-95a0-bc448a7d01da contains 'foo.example.com. ' instead of 'bar.example.com.' 
Mar 13 00:28:32.632: INFO: Lookups using dns-8113/dns-test-8df36358-e641-452f-95a0-bc448a7d01da failed for: [wheezy_udp@dns-test-service-3.dns-8113.svc.cluster.local jessie_udp@dns-test-service-3.dns-8113.svc.cluster.local] Mar 13 00:28:37.628: INFO: DNS probes using dns-test-8df36358-e641-452f-95a0-bc448a7d01da succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8113.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8113.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8113.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8113.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 13 00:28:41.861: INFO: DNS probes using dns-test-97884b98-39e5-4788-8e24-d8eb3394e506 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:28:41.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8113" for this suite. • [SLOW TEST:37.640 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":275,"completed":217,"skipped":3686,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:28:42.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Mar 13 00:28:42.209: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:28:45.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8100" for this suite. 
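A RestartAlways pod runs its init containers exactly once, in order, before any app container starts; the "PodSpec: initContainers in spec.initContainers" line is the framework echoing the spec it submitted. A minimal sketch:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: init-demo
    spec:
      restartPolicy: Always
      initContainers:
      - name: init-1
        image: busybox
        command: ["sh", "-c", "echo init-1 done"]
      - name: init-2
        image: busybox
        command: ["sh", "-c", "echo init-2 done"]
      containers:
      - name: app                       # starts only after both inits succeed
        image: k8s.gcr.io/pause:3.2
    EOF
    kubectl get pod init-demo -w   # Init:0/2 -> Init:1/2 -> PodInitializing -> Running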
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":275,"completed":218,"skipped":3695,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:28:45.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0313 00:28:47.151547 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 13 00:28:47.151: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:28:47.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8401" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":275,"completed":219,"skipped":3697,"failed":0} SSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:28:47.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating replication controller my-hostname-basic-86790a42-2a8c-4bda-9c27-541b47232ad6 Mar 13 00:28:47.280: INFO: Pod name my-hostname-basic-86790a42-2a8c-4bda-9c27-541b47232ad6: Found 0 pods out of 1 Mar 13 00:28:52.306: INFO: Pod name my-hostname-basic-86790a42-2a8c-4bda-9c27-541b47232ad6: Found 1 pods out of 1 Mar 13 00:28:52.306: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-86790a42-2a8c-4bda-9c27-541b47232ad6" are running Mar 13 00:28:52.309: INFO: Pod "my-hostname-basic-86790a42-2a8c-4bda-9c27-541b47232ad6-5l4fp" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-13 00:28:47 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-13 00:28:48 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-13 00:28:48 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-13 00:28:47 +0000 UTC Reason: Message:}]) Mar 13 00:28:52.309: INFO: Trying to dial the pod Mar 13 00:28:57.322: INFO: Controller my-hostname-basic-86790a42-2a8c-4bda-9c27-541b47232ad6: Got expected result from replica 1 [my-hostname-basic-86790a42-2a8c-4bda-9c27-541b47232ad6-5l4fp]: "my-hostname-basic-86790a42-2a8c-4bda-9c27-541b47232ad6-5l4fp", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:28:57.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9591" for this suite. 
• [SLOW TEST:10.166 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":220,"skipped":3703,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:28:57.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 13 00:28:57.974: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 13 00:28:59.984: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719656137, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719656137, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719656138, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719656137, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 13 00:29:03.028: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:29:03.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9136" for this suite. STEP: Destroying namespace "webhook-9136-markers" for this suite. 
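Once the webhook deployment and service above are serving TLS, registration itself is a single MutatingWebhookConfiguration object. A heavily hedged skeleton (the path, rule shape, and caBundle handling are assumptions; the real suite builds this via the AdmissionRegistration API):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: admissionregistration.k8s.io/v1
    kind: MutatingWebhookConfiguration
    metadata:
      name: pod-defaulter
    webhooks:
    - name: pod-defaulter.example.com
      admissionReviewVersions: ["v1", "v1beta1"]
      sideEffects: None
      clientConfig:
        service:
          name: e2e-test-webhook
          namespace: webhook-9136
          path: /mutating-pods
        caBundle: <base64-encoded CA certificate, elided>
      rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    EOF

Every pod CREATE in scope is then routed through the service before admission, which is how the test's pod comes back "updated by the webhook".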
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.943 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":275,"completed":221,"skipped":3711,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:29:03.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 13 00:29:03.347: INFO: Waiting up to 5m0s for pod "pod-1c1e1c28-9b70-4582-8785-86fded625be7" in namespace "emptydir-549" to be "Succeeded or Failed" Mar 13 00:29:03.369: INFO: Pod "pod-1c1e1c28-9b70-4582-8785-86fded625be7": Phase="Pending", Reason="", readiness=false. Elapsed: 22.421125ms Mar 13 00:29:05.373: INFO: Pod "pod-1c1e1c28-9b70-4582-8785-86fded625be7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.026456245s STEP: Saw pod success Mar 13 00:29:05.373: INFO: Pod "pod-1c1e1c28-9b70-4582-8785-86fded625be7" satisfied condition "Succeeded or Failed" Mar 13 00:29:05.375: INFO: Trying to get logs from node latest-worker pod pod-1c1e1c28-9b70-4582-8785-86fded625be7 container test-container: STEP: delete the pod Mar 13 00:29:05.398: INFO: Waiting for pod pod-1c1e1c28-9b70-4582-8785-86fded625be7 to disappear Mar 13 00:29:05.416: INFO: Pod pod-1c1e1c28-9b70-4582-8785-86fded625be7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:29:05.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-549" for this suite. 
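The emptyDir case above checks two things: that medium: Memory really yields a tmpfs mount, and that a file created there with mode 0666 keeps that mode. A throwaway pod along these lines verifies the same by hand (busybox stands in for the test's helper image; names are arbitrary):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-check     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "mount | grep /mnt/cache && touch /mnt/cache/f && chmod 0666 /mnt/cache/f && ls -l /mnt/cache/f"]
    volumeMounts:
    - name: cache
      mountPath: /mnt/cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory             # tmpfs-backed: the "tmpfs" in the test name

The pod runs to completion, and its log should show a tmpfs mount plus -rw-rw-rw- permissions on the file, mirroring the Succeeded-then-fetch-logs flow visible above.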
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":222,"skipped":3746,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:29:05.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 13 00:29:05.520: INFO: Waiting up to 5m0s for pod "pod-4e280004-523a-49f9-bdca-b92178cbc73f" in namespace "emptydir-9828" to be "Succeeded or Failed" Mar 13 00:29:05.540: INFO: Pod "pod-4e280004-523a-49f9-bdca-b92178cbc73f": Phase="Pending", Reason="", readiness=false. Elapsed: 20.277248ms Mar 13 00:29:07.544: INFO: Pod "pod-4e280004-523a-49f9-bdca-b92178cbc73f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.024028452s STEP: Saw pod success Mar 13 00:29:07.544: INFO: Pod "pod-4e280004-523a-49f9-bdca-b92178cbc73f" satisfied condition "Succeeded or Failed" Mar 13 00:29:07.546: INFO: Trying to get logs from node latest-worker pod pod-4e280004-523a-49f9-bdca-b92178cbc73f container test-container: STEP: delete the pod Mar 13 00:29:07.566: INFO: Waiting for pod pod-4e280004-523a-49f9-bdca-b92178cbc73f to disappear Mar 13 00:29:07.570: INFO: Pod pod-4e280004-523a-49f9-bdca-b92178cbc73f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:29:07.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9828" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":223,"skipped":3750,"failed":0} SSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:29:07.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-9c04908a-a2e8-4745-b3eb-5dfd121dc61c in namespace container-probe-5686 Mar 13 00:29:09.663: INFO: Started pod liveness-9c04908a-a2e8-4745-b3eb-5dfd121dc61c in namespace container-probe-5686 STEP: checking the pod's current state and verifying that restartCount is present Mar 13 00:29:09.666: INFO: Initial restart count of pod liveness-9c04908a-a2e8-4745-b3eb-5dfd121dc61c is 0 Mar 13 00:29:27.701: INFO: Restart count of pod container-probe-5686/liveness-9c04908a-a2e8-4745-b3eb-5dfd121dc61c is now 1 (18.03499663s elapsed) Mar 13 00:29:47.738: INFO: Restart count of pod container-probe-5686/liveness-9c04908a-a2e8-4745-b3eb-5dfd121dc61c is now 2 (38.071681181s elapsed) Mar 13 00:30:07.775: INFO: Restart count of pod container-probe-5686/liveness-9c04908a-a2e8-4745-b3eb-5dfd121dc61c is now 3 (58.108418613s elapsed) Mar 13 00:30:27.842: INFO: Restart count of pod container-probe-5686/liveness-9c04908a-a2e8-4745-b3eb-5dfd121dc61c is now 4 (1m18.175678798s elapsed) Mar 13 00:31:28.029: INFO: Restart count of pod container-probe-5686/liveness-9c04908a-a2e8-4745-b3eb-5dfd121dc61c is now 5 (2m18.362362445s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:31:28.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5686" for this suite. 
• [SLOW TEST:140.476 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":275,"completed":224,"skipped":3754,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:31:28.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-b61f9bb0-a90c-4feb-a80b-d4f4464e75df STEP: Creating a pod to test consume configMaps Mar 13 00:31:28.182: INFO: Waiting up to 5m0s for pod "pod-configmaps-20cf9906-9fe6-4745-844a-7aa67c47eaa2" in namespace "configmap-2113" to be "Succeeded or Failed" Mar 13 00:31:28.192: INFO: Pod "pod-configmaps-20cf9906-9fe6-4745-844a-7aa67c47eaa2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.406037ms Mar 13 00:31:30.200: INFO: Pod "pod-configmaps-20cf9906-9fe6-4745-844a-7aa67c47eaa2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.017799369s STEP: Saw pod success Mar 13 00:31:30.200: INFO: Pod "pod-configmaps-20cf9906-9fe6-4745-844a-7aa67c47eaa2" satisfied condition "Succeeded or Failed" Mar 13 00:31:30.203: INFO: Trying to get logs from node latest-worker pod pod-configmaps-20cf9906-9fe6-4745-844a-7aa67c47eaa2 container configmap-volume-test: STEP: delete the pod Mar 13 00:31:30.236: INFO: Waiting for pod pod-configmaps-20cf9906-9fe6-4745-844a-7aa67c47eaa2 to disappear Mar 13 00:31:30.240: INFO: Pod pod-configmaps-20cf9906-9fe6-4745-844a-7aa67c47eaa2 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:31:30.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2113" for this suite. 
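The ConfigMap-as-volume pattern this test covers, including the non-root constraint, can be written out as a pair of manifests like the following; the names, key, value, and UID are illustrative (the test's generated names are visible in the log above):

apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-volume-demo           # hypothetical name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-demo       # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000              # the "as non-root" part of the test name
    runAsNonRoot: true
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/cm/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: cm-volume-demo

ConfigMap volume files default to mode 0644, which is why an unprivileged UID can read them without any extra configuration.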
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":225,"skipped":3770,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:31:30.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 13 00:31:30.332: INFO: Pod name rollover-pod: Found 0 pods out of 1 Mar 13 00:31:35.334: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 13 00:31:35.334: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Mar 13 00:31:37.337: INFO: Creating deployment "test-rollover-deployment" Mar 13 00:31:37.352: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Mar 13 00:31:39.357: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Mar 13 00:31:39.361: INFO: Ensure that both replica sets have 1 created replica Mar 13 00:31:39.365: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Mar 13 00:31:39.370: INFO: Updating deployment test-rollover-deployment Mar 13 00:31:39.370: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Mar 13 00:31:41.380: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Mar 13 00:31:41.385: INFO: Make sure deployment "test-rollover-deployment" is complete Mar 13 00:31:41.390: INFO: all replica sets need to contain the pod-template-hash label Mar 13 00:31:41.390: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719656297, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719656297, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719656301, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719656297, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 13 00:31:43.394: INFO: all replica sets need to contain the pod-template-hash label Mar 13 00:31:43.394: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719656297, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719656297, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719656301, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719656297, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 13 00:31:45.396: INFO: all replica sets need to contain the pod-template-hash label Mar 13 00:31:45.396: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719656297, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719656297, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719656301, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719656297, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 13 00:31:47.396: INFO: all replica sets need to contain the pod-template-hash label Mar 13 00:31:47.396: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719656297, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719656297, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719656301, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719656297, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 13 00:31:49.406: INFO: all replica sets need to contain the pod-template-hash label Mar 13 00:31:49.406: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719656297, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719656297, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719656301, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719656297, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 13 00:31:51.537: INFO: Mar 13 00:31:51.537: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719656297, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719656297, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719656311, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719656297, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 13 00:31:53.418: INFO: Mar 13 00:31:53.418: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 13 00:31:53.425: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-9106 /apis/apps/v1/namespaces/deployment-9106/deployments/test-rollover-deployment 922908be-10e6-4a50-80f9-2ae62a7ea07a 1227230 2 2020-03-13 00:31:37 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0034643e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-13 00:31:37 +0000 UTC,LastTransitionTime:2020-03-13 00:31:37 +0000 
UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-78df7bc796" has successfully progressed.,LastUpdateTime:2020-03-13 00:31:51 +0000 UTC,LastTransitionTime:2020-03-13 00:31:37 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 13 00:31:53.427: INFO: New ReplicaSet "test-rollover-deployment-78df7bc796" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-78df7bc796 deployment-9106 /apis/apps/v1/namespaces/deployment-9106/replicasets/test-rollover-deployment-78df7bc796 9fd0bafb-6a8f-4a5f-836a-206000427c96 1227218 2 2020-03-13 00:31:39 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 922908be-10e6-4a50-80f9-2ae62a7ea07a 0xc001bc1e97 0xc001bc1e98}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78df7bc796,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001bc1f88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 13 00:31:53.427: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Mar 13 00:31:53.427: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-9106 /apis/apps/v1/namespaces/deployment-9106/replicasets/test-rollover-controller 4e5fe499-2431-4040-8b35-9a19845a200d 1227228 2 2020-03-13 00:31:30 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 922908be-10e6-4a50-80f9-2ae62a7ea07a 0xc001bc1d77 0xc001bc1d78}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc001bc1dd8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 13 00:31:53.427: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-9106 /apis/apps/v1/namespaces/deployment-9106/replicasets/test-rollover-deployment-f6c94f66c fd00c220-68dd-4d3e-a4c9-13c280d11ecd 1227174 2 2020-03-13 00:31:37 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 922908be-10e6-4a50-80f9-2ae62a7ea07a 0xc00421c010 0xc00421c011}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00421c088 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 13 00:31:53.430: INFO: Pod "test-rollover-deployment-78df7bc796-7xkgq" is available: &Pod{ObjectMeta:{test-rollover-deployment-78df7bc796-7xkgq test-rollover-deployment-78df7bc796- deployment-9106 /api/v1/namespaces/deployment-9106/pods/test-rollover-deployment-78df7bc796-7xkgq 88c9a4fe-4dc3-4126-a721-cb76d0315b78 1227186 0 2020-03-13 00:31:39 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[] [{apps/v1 ReplicaSet test-rollover-deployment-78df7bc796 9fd0bafb-6a8f-4a5f-836a-206000427c96 0xc00421c647 0xc00421c648}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zgdf5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zgdf5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zgdf5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:31:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:31:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:31:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-13 00:31:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:10.244.1.63,StartTime:2020-03-13 00:31:39 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-13 00:31:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://64b7b6b6628aac3d7d1fec5d3365650836cb770d425ca171deed28cef17e582d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.63,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:31:53.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9106" for this suite. • [SLOW TEST:23.184 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":275,"completed":226,"skipped":3835,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:31:53.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 13 00:31:54.018: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 13 00:31:57.055: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 13 00:31:57.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:31:58.290: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-1138" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 •{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":227,"skipped":3841,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:31:58.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service nodeport-test with type=NodePort in namespace services-8420 STEP: creating replication controller nodeport-test in namespace services-8420 I0313 00:31:58.489195 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-8420, replica count: 2 I0313 00:32:01.539667 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 13 00:32:01.539: INFO: Creating new exec pod Mar 13 00:32:04.598: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-8420 execpodhk5f9 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Mar 13 00:32:04.794: INFO: stderr: "I0313 00:32:04.715638 3090 log.go:172] (0xc0009471e0) (0xc0009ac6e0) Create stream\nI0313 00:32:04.715677 3090 log.go:172] (0xc0009471e0) (0xc0009ac6e0) Stream added, broadcasting: 1\nI0313 00:32:04.718321 3090 log.go:172] (0xc0009471e0) Reply frame received for 1\nI0313 00:32:04.718346 3090 log.go:172] (0xc0009471e0) (0xc0007e7680) Create stream\nI0313 00:32:04.718352 3090 log.go:172] (0xc0009471e0) (0xc0007e7680) Stream added, broadcasting: 3\nI0313 00:32:04.719203 3090 log.go:172] (0xc0009471e0) Reply frame received for 3\nI0313 00:32:04.719224 3090 log.go:172] (0xc0009471e0) (0xc0005a4aa0) Create stream\nI0313 00:32:04.719230 3090 log.go:172] (0xc0009471e0) (0xc0005a4aa0) Stream added, broadcasting: 5\nI0313 00:32:04.719734 3090 log.go:172] (0xc0009471e0) Reply frame received for 5\nI0313 00:32:04.790850 3090 log.go:172] (0xc0009471e0) Data frame received for 5\nI0313 00:32:04.790866 3090 log.go:172] (0xc0005a4aa0) (5) Data frame handling\nI0313 00:32:04.790879 3090 log.go:172] (0xc0005a4aa0) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0313 00:32:04.791174 3090 log.go:172] (0xc0009471e0) Data frame received for 5\nI0313 00:32:04.791218 3090 log.go:172] (0xc0005a4aa0) (5) Data frame handling\nI0313 00:32:04.791230 3090 log.go:172] (0xc0005a4aa0) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] 
succeeded!\nI0313 00:32:04.791409 3090 log.go:172] (0xc0009471e0) Data frame received for 3\nI0313 00:32:04.791431 3090 log.go:172] (0xc0007e7680) (3) Data frame handling\nI0313 00:32:04.791452 3090 log.go:172] (0xc0009471e0) Data frame received for 5\nI0313 00:32:04.791466 3090 log.go:172] (0xc0005a4aa0) (5) Data frame handling\nI0313 00:32:04.792308 3090 log.go:172] (0xc0009471e0) Data frame received for 1\nI0313 00:32:04.792321 3090 log.go:172] (0xc0009ac6e0) (1) Data frame handling\nI0313 00:32:04.792329 3090 log.go:172] (0xc0009ac6e0) (1) Data frame sent\nI0313 00:32:04.792338 3090 log.go:172] (0xc0009471e0) (0xc0009ac6e0) Stream removed, broadcasting: 1\nI0313 00:32:04.792575 3090 log.go:172] (0xc0009471e0) (0xc0009ac6e0) Stream removed, broadcasting: 1\nI0313 00:32:04.792589 3090 log.go:172] (0xc0009471e0) (0xc0007e7680) Stream removed, broadcasting: 3\nI0313 00:32:04.792686 3090 log.go:172] (0xc0009471e0) Go away received\nI0313 00:32:04.792704 3090 log.go:172] (0xc0009471e0) (0xc0005a4aa0) Stream removed, broadcasting: 5\n" Mar 13 00:32:04.795: INFO: stdout: "" Mar 13 00:32:04.795: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-8420 execpodhk5f9 -- /bin/sh -x -c nc -zv -t -w 2 10.96.78.162 80' Mar 13 00:32:04.958: INFO: stderr: "I0313 00:32:04.887390 3110 log.go:172] (0xc000b8a000) (0xc0007f5220) Create stream\nI0313 00:32:04.887421 3110 log.go:172] (0xc000b8a000) (0xc0007f5220) Stream added, broadcasting: 1\nI0313 00:32:04.888320 3110 log.go:172] (0xc000b8a000) Reply frame received for 1\nI0313 00:32:04.888338 3110 log.go:172] (0xc000b8a000) (0xc0007f5400) Create stream\nI0313 00:32:04.888344 3110 log.go:172] (0xc000b8a000) (0xc0007f5400) Stream added, broadcasting: 3\nI0313 00:32:04.888816 3110 log.go:172] (0xc000b8a000) Reply frame received for 3\nI0313 00:32:04.888830 3110 log.go:172] (0xc000b8a000) (0xc0007f54a0) Create stream\nI0313 00:32:04.888841 3110 log.go:172] (0xc000b8a000) (0xc0007f54a0) Stream added, broadcasting: 5\nI0313 00:32:04.889319 3110 log.go:172] (0xc000b8a000) Reply frame received for 5\nI0313 00:32:04.955270 3110 log.go:172] (0xc000b8a000) Data frame received for 3\nI0313 00:32:04.955295 3110 log.go:172] (0xc000b8a000) Data frame received for 5\nI0313 00:32:04.955310 3110 log.go:172] (0xc0007f54a0) (5) Data frame handling\nI0313 00:32:04.955325 3110 log.go:172] (0xc0007f54a0) (5) Data frame sent\nI0313 00:32:04.955334 3110 log.go:172] (0xc000b8a000) Data frame received for 5\nI0313 00:32:04.955339 3110 log.go:172] (0xc0007f54a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.78.162 80\nConnection to 10.96.78.162 80 port [tcp/http] succeeded!\nI0313 00:32:04.955354 3110 log.go:172] (0xc0007f5400) (3) Data frame handling\nI0313 00:32:04.956145 3110 log.go:172] (0xc000b8a000) Data frame received for 1\nI0313 00:32:04.956158 3110 log.go:172] (0xc0007f5220) (1) Data frame handling\nI0313 00:32:04.956163 3110 log.go:172] (0xc0007f5220) (1) Data frame sent\nI0313 00:32:04.956172 3110 log.go:172] (0xc000b8a000) (0xc0007f5220) Stream removed, broadcasting: 1\nI0313 00:32:04.956180 3110 log.go:172] (0xc000b8a000) Go away received\nI0313 00:32:04.956447 3110 log.go:172] (0xc000b8a000) (0xc0007f5220) Stream removed, broadcasting: 1\nI0313 00:32:04.956460 3110 log.go:172] (0xc000b8a000) (0xc0007f5400) Stream removed, broadcasting: 3\nI0313 00:32:04.956467 3110 log.go:172] (0xc000b8a000) (0xc0007f54a0) Stream removed, broadcasting: 5\n" Mar 13 00:32:04.958: INFO: stdout: "" Mar 13 
00:32:04.958: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-8420 execpodhk5f9 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.16 30354' Mar 13 00:32:05.129: INFO: stderr: "I0313 00:32:05.063033 3130 log.go:172] (0xc000b76f20) (0xc00095a5a0) Create stream\nI0313 00:32:05.063072 3130 log.go:172] (0xc000b76f20) (0xc00095a5a0) Stream added, broadcasting: 1\nI0313 00:32:05.066336 3130 log.go:172] (0xc000b76f20) Reply frame received for 1\nI0313 00:32:05.066365 3130 log.go:172] (0xc000b76f20) (0xc000679680) Create stream\nI0313 00:32:05.066372 3130 log.go:172] (0xc000b76f20) (0xc000679680) Stream added, broadcasting: 3\nI0313 00:32:05.067003 3130 log.go:172] (0xc000b76f20) Reply frame received for 3\nI0313 00:32:05.067034 3130 log.go:172] (0xc000b76f20) (0xc000478aa0) Create stream\nI0313 00:32:05.067055 3130 log.go:172] (0xc000b76f20) (0xc000478aa0) Stream added, broadcasting: 5\nI0313 00:32:05.067801 3130 log.go:172] (0xc000b76f20) Reply frame received for 5\nI0313 00:32:05.123849 3130 log.go:172] (0xc000b76f20) Data frame received for 5\nI0313 00:32:05.123885 3130 log.go:172] (0xc000478aa0) (5) Data frame handling\nI0313 00:32:05.123893 3130 log.go:172] (0xc000478aa0) (5) Data frame sent\nI0313 00:32:05.123899 3130 log.go:172] (0xc000b76f20) Data frame received for 5\nI0313 00:32:05.123906 3130 log.go:172] (0xc000478aa0) (5) Data frame handling\nI0313 00:32:05.123918 3130 log.go:172] (0xc000b76f20) Data frame received for 3\nI0313 00:32:05.123923 3130 log.go:172] (0xc000679680) (3) Data frame handling\n+ nc -zv -t -w 2 172.17.0.16 30354\nConnection to 172.17.0.16 30354 port [tcp/30354] succeeded!\nI0313 00:32:05.125746 3130 log.go:172] (0xc000b76f20) Data frame received for 1\nI0313 00:32:05.125762 3130 log.go:172] (0xc00095a5a0) (1) Data frame handling\nI0313 00:32:05.125770 3130 log.go:172] (0xc00095a5a0) (1) Data frame sent\nI0313 00:32:05.126259 3130 log.go:172] (0xc000b76f20) (0xc00095a5a0) Stream removed, broadcasting: 1\nI0313 00:32:05.126286 3130 log.go:172] (0xc000b76f20) Go away received\nI0313 00:32:05.126530 3130 log.go:172] (0xc000b76f20) (0xc00095a5a0) Stream removed, broadcasting: 1\nI0313 00:32:05.126545 3130 log.go:172] (0xc000b76f20) (0xc000679680) Stream removed, broadcasting: 3\nI0313 00:32:05.126551 3130 log.go:172] (0xc000b76f20) (0xc000478aa0) Stream removed, broadcasting: 5\n" Mar 13 00:32:05.129: INFO: stdout: "" Mar 13 00:32:05.129: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=services-8420 execpodhk5f9 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.18 30354' Mar 13 00:32:05.286: INFO: stderr: "I0313 00:32:05.217231 3151 log.go:172] (0xc0009c8d10) (0xc000ac2320) Create stream\nI0313 00:32:05.217263 3151 log.go:172] (0xc0009c8d10) (0xc000ac2320) Stream added, broadcasting: 1\nI0313 00:32:05.219141 3151 log.go:172] (0xc0009c8d10) Reply frame received for 1\nI0313 00:32:05.219173 3151 log.go:172] (0xc0009c8d10) (0xc00098c0a0) Create stream\nI0313 00:32:05.219181 3151 log.go:172] (0xc0009c8d10) (0xc00098c0a0) Stream added, broadcasting: 3\nI0313 00:32:05.219769 3151 log.go:172] (0xc0009c8d10) Reply frame received for 3\nI0313 00:32:05.219784 3151 log.go:172] (0xc0009c8d10) (0xc000ac23c0) Create stream\nI0313 00:32:05.219790 3151 log.go:172] (0xc0009c8d10) (0xc000ac23c0) Stream added, broadcasting: 5\nI0313 00:32:05.220454 3151 log.go:172] (0xc0009c8d10) Reply frame received for 5\nI0313 00:32:05.282645 3151 log.go:172] 
(0xc0009c8d10) Data frame received for 5\nI0313 00:32:05.282668 3151 log.go:172] (0xc000ac23c0) (5) Data frame handling\nI0313 00:32:05.282685 3151 log.go:172] (0xc000ac23c0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.18 30354\nConnection to 172.17.0.18 30354 port [tcp/30354] succeeded!\nI0313 00:32:05.282726 3151 log.go:172] (0xc0009c8d10) Data frame received for 3\nI0313 00:32:05.282738 3151 log.go:172] (0xc00098c0a0) (3) Data frame handling\nI0313 00:32:05.282750 3151 log.go:172] (0xc0009c8d10) Data frame received for 5\nI0313 00:32:05.282757 3151 log.go:172] (0xc000ac23c0) (5) Data frame handling\nI0313 00:32:05.283567 3151 log.go:172] (0xc0009c8d10) Data frame received for 1\nI0313 00:32:05.283591 3151 log.go:172] (0xc000ac2320) (1) Data frame handling\nI0313 00:32:05.283604 3151 log.go:172] (0xc000ac2320) (1) Data frame sent\nI0313 00:32:05.283616 3151 log.go:172] (0xc0009c8d10) (0xc000ac2320) Stream removed, broadcasting: 1\nI0313 00:32:05.283630 3151 log.go:172] (0xc0009c8d10) Go away received\nI0313 00:32:05.283949 3151 log.go:172] (0xc0009c8d10) (0xc000ac2320) Stream removed, broadcasting: 1\nI0313 00:32:05.283962 3151 log.go:172] (0xc0009c8d10) (0xc00098c0a0) Stream removed, broadcasting: 3\nI0313 00:32:05.283970 3151 log.go:172] (0xc0009c8d10) (0xc000ac23c0) Stream removed, broadcasting: 5\n" Mar 13 00:32:05.286: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:32:05.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8420" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:6.933 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":275,"completed":228,"skipped":3856,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:32:05.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2254.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-2254.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2254.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2254.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-2254.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2254.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 13 00:32:09.479: INFO: DNS probes using dns-2254/dns-test-a644150d-ae1f-482d-bb5d-a21a62b34656 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:32:09.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2254" for this suite. •{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":229,"skipped":3887,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:32:09.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:32:16.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3988" for this suite. • [SLOW TEST:7.129 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":275,"completed":230,"skipped":3904,"failed":0} SSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:32:16.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Mar 13 00:32:16.771: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 13 00:32:16.786: INFO: Waiting for terminating namespaces to be deleted... Mar 13 00:32:16.787: INFO: Logging pods the kubelet thinks is on node latest-worker before test Mar 13 00:32:16.791: INFO: kube-proxy-9jc24 from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container statuses recorded) Mar 13 00:32:16.791: INFO: Container kube-proxy ready: true, restart count 0 Mar 13 00:32:16.791: INFO: kindnet-2j5xm from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container statuses recorded) Mar 13 00:32:16.791: INFO: Container kindnet-cni ready: true, restart count 0 Mar 13 00:32:16.791: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Mar 13 00:32:16.801: INFO: coredns-6955765f44-cgshp from kube-system started at 2020-03-08 14:50:16 +0000 UTC (1 container statuses recorded) Mar 13 00:32:16.801: INFO: Container coredns ready: true, restart count 0 Mar 13 00:32:16.801: INFO: kindnet-spz5f from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container statuses recorded) Mar 13 00:32:16.801: INFO: Container kindnet-cni ready: true, restart count 0 Mar 13 00:32:16.801: INFO: kube-proxy-cx5xz from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container statuses recorded) Mar 13 00:32:16.801: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-a0b017fb-204e-4276-9f0a-a8f76bb22551 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-a0b017fb-204e-4276-9f0a-a8f76bb22551 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-a0b017fb-204e-4276-9f0a-a8f76bb22551 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:37:22.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3809" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:306.276 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":275,"completed":231,"skipped":3910,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:37:23.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-ba9533f8-caa2-4b0e-96c2-c014360345cf in namespace container-probe-1755 Mar 13 00:37:25.131: INFO: Started pod busybox-ba9533f8-caa2-4b0e-96c2-c014360345cf in namespace container-probe-1755 STEP: checking the pod's current state and verifying that restartCount is present Mar 13 00:37:25.134: INFO: Initial restart count of pod busybox-ba9533f8-caa2-4b0e-96c2-c014360345cf is 0 Mar 13 00:38:21.239: INFO: Restart count of pod container-probe-1755/busybox-ba9533f8-caa2-4b0e-96c2-c014360345cf is now 1 (56.104454876s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:38:21.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1755" for this suite. 
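The exec-probe test that just finished is the classic cat /tmp/health pattern: the container creates the file, removes it after a while, and the now-failing probe makes the kubelet restart the container (the log shows restartCount going 0 to 1 after roughly 56 seconds). A minimal reproduction, with timings assumed rather than taken from the test source:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness-exec    # hypothetical name
spec:
  containers:
  - name: busybox
    image: busybox
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 3

After /tmp/health disappears, three consecutive probe failures mark the container unhealthy, the kubelet kills and restarts it, and the restart count increments exactly as logged above.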
• [SLOW TEST:58.268 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":232,"skipped":3935,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:38:21.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 13 00:38:21.315: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-361 I0313 00:38:21.355747 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-361, replica count: 1 I0313 00:38:22.406170 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0313 00:38:23.406355 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 13 00:38:23.539: INFO: Created: latency-svc-4gwz4 Mar 13 00:38:23.548: INFO: Got endpoints: latency-svc-4gwz4 [41.81518ms] Mar 13 00:38:23.559: INFO: Created: latency-svc-9ml2h Mar 13 00:38:23.566: INFO: Got endpoints: latency-svc-9ml2h [17.705253ms] Mar 13 00:38:23.600: INFO: Created: latency-svc-77t57 Mar 13 00:38:23.624: INFO: Got endpoints: latency-svc-77t57 [75.339147ms] Mar 13 00:38:23.624: INFO: Created: latency-svc-7pt86 Mar 13 00:38:23.631: INFO: Got endpoints: latency-svc-7pt86 [81.506983ms] Mar 13 00:38:23.654: INFO: Created: latency-svc-22vbm Mar 13 00:38:23.661: INFO: Got endpoints: latency-svc-22vbm [111.141687ms] Mar 13 00:38:23.684: INFO: Created: latency-svc-58gj2 Mar 13 00:38:23.691: INFO: Got endpoints: latency-svc-58gj2 [142.468909ms] Mar 13 00:38:23.725: INFO: Created: latency-svc-r5z5b Mar 13 00:38:23.750: INFO: Created: latency-svc-rbjtg Mar 13 00:38:23.750: INFO: Got endpoints: latency-svc-r5z5b [199.725966ms] Mar 13 00:38:23.758: INFO: Got endpoints: latency-svc-rbjtg [208.964633ms] Mar 13 00:38:23.780: INFO: Created: latency-svc-xg7zb Mar 13 00:38:23.793: INFO: Got endpoints: latency-svc-xg7zb [243.864651ms] Mar 13 00:38:23.821: INFO: Created: latency-svc-c29g9 Mar 13 00:38:23.865: INFO: Got endpoints: latency-svc-c29g9 [315.662513ms] Mar 13 00:38:23.885: INFO: Created: latency-svc-lm58v Mar 13 00:38:23.889: INFO: Got endpoints: latency-svc-lm58v [339.855046ms] Mar 13 00:38:23.941: INFO: Created: latency-svc-2znj6 Mar 13 00:38:23.949: INFO: Got endpoints: latency-svc-2znj6 [399.429444ms] Mar 13 00:38:23.989: INFO: Created: latency-svc-k7gnv Mar 
13 00:38:23.997: INFO: Got endpoints: latency-svc-k7gnv [447.272406ms] Mar 13 00:38:24.019: INFO: Created: latency-svc-4kqnd Mar 13 00:38:24.028: INFO: Got endpoints: latency-svc-4kqnd [477.894767ms] Mar 13 00:38:24.049: INFO: Created: latency-svc-v6lcv Mar 13 00:38:24.058: INFO: Got endpoints: latency-svc-v6lcv [507.882998ms] Mar 13 00:38:24.085: INFO: Created: latency-svc-bh5pd Mar 13 00:38:24.115: INFO: Got endpoints: latency-svc-bh5pd [564.323249ms] Mar 13 00:38:24.142: INFO: Created: latency-svc-vv4wv Mar 13 00:38:24.164: INFO: Got endpoints: latency-svc-vv4wv [597.794311ms] Mar 13 00:38:24.199: INFO: Created: latency-svc-p5nzg Mar 13 00:38:24.206: INFO: Got endpoints: latency-svc-p5nzg [582.158564ms] Mar 13 00:38:24.248: INFO: Created: latency-svc-2mf2w Mar 13 00:38:24.266: INFO: Got endpoints: latency-svc-2mf2w [635.466972ms] Mar 13 00:38:24.313: INFO: Created: latency-svc-s5wvb Mar 13 00:38:24.332: INFO: Got endpoints: latency-svc-s5wvb [671.102464ms] Mar 13 00:38:24.360: INFO: Created: latency-svc-2kc5g Mar 13 00:38:24.368: INFO: Got endpoints: latency-svc-2kc5g [676.997015ms] Mar 13 00:38:24.392: INFO: Created: latency-svc-p8dgj Mar 13 00:38:24.398: INFO: Got endpoints: latency-svc-p8dgj [648.463506ms] Mar 13 00:38:24.422: INFO: Created: latency-svc-l4rzm Mar 13 00:38:24.434: INFO: Got endpoints: latency-svc-l4rzm [675.893125ms] Mar 13 00:38:24.492: INFO: Created: latency-svc-t5b2s Mar 13 00:38:24.500: INFO: Got endpoints: latency-svc-t5b2s [707.108334ms] Mar 13 00:38:24.523: INFO: Created: latency-svc-krgkp Mar 13 00:38:24.537: INFO: Got endpoints: latency-svc-krgkp [671.87527ms] Mar 13 00:38:24.566: INFO: Created: latency-svc-cnkz4 Mar 13 00:38:24.584: INFO: Got endpoints: latency-svc-cnkz4 [694.97101ms] Mar 13 00:38:24.624: INFO: Created: latency-svc-nv6dz Mar 13 00:38:24.651: INFO: Got endpoints: latency-svc-nv6dz [701.2575ms] Mar 13 00:38:24.651: INFO: Created: latency-svc-8dnp8 Mar 13 00:38:24.662: INFO: Got endpoints: latency-svc-8dnp8 [664.894899ms] Mar 13 00:38:24.691: INFO: Created: latency-svc-j7hfh Mar 13 00:38:24.698: INFO: Got endpoints: latency-svc-j7hfh [669.691554ms] Mar 13 00:38:24.715: INFO: Created: latency-svc-tttqq Mar 13 00:38:24.721: INFO: Got endpoints: latency-svc-tttqq [663.327409ms] Mar 13 00:38:24.773: INFO: Created: latency-svc-jpzjj Mar 13 00:38:24.788: INFO: Created: latency-svc-79f6k Mar 13 00:38:24.788: INFO: Got endpoints: latency-svc-jpzjj [673.851169ms] Mar 13 00:38:24.812: INFO: Got endpoints: latency-svc-79f6k [648.276216ms] Mar 13 00:38:24.835: INFO: Created: latency-svc-x56br Mar 13 00:38:24.841: INFO: Got endpoints: latency-svc-x56br [634.974776ms] Mar 13 00:38:24.860: INFO: Created: latency-svc-pzllt Mar 13 00:38:24.865: INFO: Got endpoints: latency-svc-pzllt [599.204422ms] Mar 13 00:38:24.899: INFO: Created: latency-svc-wcsrz Mar 13 00:38:24.927: INFO: Got endpoints: latency-svc-wcsrz [594.628928ms] Mar 13 00:38:24.927: INFO: Created: latency-svc-crk22 Mar 13 00:38:24.944: INFO: Got endpoints: latency-svc-crk22 [575.799458ms] Mar 13 00:38:24.975: INFO: Created: latency-svc-pg6l9 Mar 13 00:38:25.031: INFO: Got endpoints: latency-svc-pg6l9 [632.674739ms] Mar 13 00:38:25.045: INFO: Created: latency-svc-gkjsf Mar 13 00:38:25.052: INFO: Got endpoints: latency-svc-gkjsf [617.94625ms] Mar 13 00:38:25.076: INFO: Created: latency-svc-2k5j9 Mar 13 00:38:25.124: INFO: Got endpoints: latency-svc-2k5j9 [623.134594ms] Mar 13 00:38:25.169: INFO: Created: latency-svc-qjms5 Mar 13 00:38:25.184: INFO: Got endpoints: latency-svc-qjms5 [647.026818ms] Mar 
13 00:38:25.202: INFO: Created: latency-svc-s74wt Mar 13 00:38:25.208: INFO: Got endpoints: latency-svc-s74wt [624.104064ms] Mar 13 00:38:25.231: INFO: Created: latency-svc-5vn6l Mar 13 00:38:25.237: INFO: Got endpoints: latency-svc-5vn6l [586.608284ms] Mar 13 00:38:25.261: INFO: Created: latency-svc-zdb6k Mar 13 00:38:25.267: INFO: Got endpoints: latency-svc-zdb6k [605.479334ms] Mar 13 00:38:25.306: INFO: Created: latency-svc-vt7x6 Mar 13 00:38:25.309: INFO: Got endpoints: latency-svc-vt7x6 [611.081724ms] Mar 13 00:38:25.328: INFO: Created: latency-svc-dgtll Mar 13 00:38:25.342: INFO: Got endpoints: latency-svc-dgtll [620.343767ms] Mar 13 00:38:25.358: INFO: Created: latency-svc-qwqhm Mar 13 00:38:25.369: INFO: Got endpoints: latency-svc-qwqhm [580.609312ms] Mar 13 00:38:25.387: INFO: Created: latency-svc-4rbrd Mar 13 00:38:25.393: INFO: Got endpoints: latency-svc-4rbrd [581.026691ms] Mar 13 00:38:25.432: INFO: Created: latency-svc-ggn66 Mar 13 00:38:25.441: INFO: Got endpoints: latency-svc-ggn66 [599.949697ms] Mar 13 00:38:25.479: INFO: Created: latency-svc-bft6l Mar 13 00:38:25.495: INFO: Got endpoints: latency-svc-bft6l [629.704345ms] Mar 13 00:38:25.570: INFO: Created: latency-svc-ts4zj Mar 13 00:38:25.592: INFO: Got endpoints: latency-svc-ts4zj [665.099559ms] Mar 13 00:38:25.593: INFO: Created: latency-svc-flpt6 Mar 13 00:38:25.603: INFO: Got endpoints: latency-svc-flpt6 [658.611208ms] Mar 13 00:38:25.621: INFO: Created: latency-svc-6lksn Mar 13 00:38:25.634: INFO: Got endpoints: latency-svc-6lksn [602.810889ms] Mar 13 00:38:25.653: INFO: Created: latency-svc-pvptd Mar 13 00:38:25.657: INFO: Got endpoints: latency-svc-pvptd [605.232696ms] Mar 13 00:38:25.695: INFO: Created: latency-svc-tbdc9 Mar 13 00:38:25.718: INFO: Got endpoints: latency-svc-tbdc9 [594.488036ms] Mar 13 00:38:25.719: INFO: Created: latency-svc-9226t Mar 13 00:38:25.735: INFO: Got endpoints: latency-svc-9226t [551.535089ms] Mar 13 00:38:25.760: INFO: Created: latency-svc-5fr7b Mar 13 00:38:25.765: INFO: Got endpoints: latency-svc-5fr7b [557.259467ms] Mar 13 00:38:25.790: INFO: Created: latency-svc-5pfld Mar 13 00:38:25.795: INFO: Got endpoints: latency-svc-5pfld [557.268148ms] Mar 13 00:38:25.821: INFO: Created: latency-svc-88s4l Mar 13 00:38:25.844: INFO: Got endpoints: latency-svc-88s4l [577.220609ms] Mar 13 00:38:25.871: INFO: Created: latency-svc-7cf2f Mar 13 00:38:25.880: INFO: Got endpoints: latency-svc-7cf2f [571.077583ms] Mar 13 00:38:25.898: INFO: Created: latency-svc-pw85z Mar 13 00:38:25.947: INFO: Got endpoints: latency-svc-pw85z [605.215743ms] Mar 13 00:38:25.970: INFO: Created: latency-svc-6xhbn Mar 13 00:38:25.995: INFO: Got endpoints: latency-svc-6xhbn [625.673691ms] Mar 13 00:38:26.018: INFO: Created: latency-svc-lldsx Mar 13 00:38:26.028: INFO: Got endpoints: latency-svc-lldsx [635.168762ms] Mar 13 00:38:26.079: INFO: Created: latency-svc-9z6rd Mar 13 00:38:26.094: INFO: Got endpoints: latency-svc-9z6rd [653.184359ms] Mar 13 00:38:26.120: INFO: Created: latency-svc-hxmp9 Mar 13 00:38:26.143: INFO: Got endpoints: latency-svc-hxmp9 [648.204439ms] Mar 13 00:38:26.168: INFO: Created: latency-svc-vmlgn Mar 13 00:38:26.179: INFO: Got endpoints: latency-svc-vmlgn [586.796832ms] Mar 13 00:38:26.205: INFO: Created: latency-svc-fhzn2 Mar 13 00:38:26.215: INFO: Got endpoints: latency-svc-fhzn2 [612.435374ms] Mar 13 00:38:26.228: INFO: Created: latency-svc-bsj4j Mar 13 00:38:26.233: INFO: Got endpoints: latency-svc-bsj4j [599.318841ms] Mar 13 00:38:26.252: INFO: Created: latency-svc-lbc55 Mar 13 00:38:26.262: 
INFO: Got endpoints: latency-svc-lbc55 [605.041408ms] Mar 13 00:38:26.281: INFO: Created: latency-svc-rx4n8 Mar 13 00:38:26.292: INFO: Got endpoints: latency-svc-rx4n8 [574.146835ms] Mar 13 00:38:26.355: INFO: Created: latency-svc-gvx4r Mar 13 00:38:26.379: INFO: Created: latency-svc-c7j6l Mar 13 00:38:26.380: INFO: Got endpoints: latency-svc-gvx4r [644.252395ms] Mar 13 00:38:26.388: INFO: Got endpoints: latency-svc-c7j6l [622.859458ms] Mar 13 00:38:26.409: INFO: Created: latency-svc-qmz6p Mar 13 00:38:26.418: INFO: Got endpoints: latency-svc-qmz6p [623.401973ms] Mar 13 00:38:26.444: INFO: Created: latency-svc-7t9t2 Mar 13 00:38:26.525: INFO: Got endpoints: latency-svc-7t9t2 [680.99074ms] Mar 13 00:38:26.531: INFO: Created: latency-svc-hmbvp Mar 13 00:38:26.538: INFO: Got endpoints: latency-svc-hmbvp [658.099528ms] Mar 13 00:38:26.558: INFO: Created: latency-svc-v27q4 Mar 13 00:38:26.570: INFO: Got endpoints: latency-svc-v27q4 [623.089744ms] Mar 13 00:38:26.594: INFO: Created: latency-svc-fmghd Mar 13 00:38:26.611: INFO: Got endpoints: latency-svc-fmghd [616.408256ms] Mar 13 00:38:26.665: INFO: Created: latency-svc-66s9b Mar 13 00:38:26.682: INFO: Got endpoints: latency-svc-66s9b [653.99041ms] Mar 13 00:38:26.727: INFO: Created: latency-svc-749m2 Mar 13 00:38:26.736: INFO: Got endpoints: latency-svc-749m2 [641.679997ms] Mar 13 00:38:26.751: INFO: Created: latency-svc-dm8vq Mar 13 00:38:26.797: INFO: Got endpoints: latency-svc-dm8vq [653.701604ms] Mar 13 00:38:26.798: INFO: Created: latency-svc-897r8 Mar 13 00:38:26.807: INFO: Got endpoints: latency-svc-897r8 [628.815767ms] Mar 13 00:38:26.833: INFO: Created: latency-svc-nlmnk Mar 13 00:38:26.858: INFO: Got endpoints: latency-svc-nlmnk [642.996268ms] Mar 13 00:38:26.876: INFO: Created: latency-svc-nh7bg Mar 13 00:38:26.879: INFO: Got endpoints: latency-svc-nh7bg [646.206394ms] Mar 13 00:38:26.895: INFO: Created: latency-svc-hgfkx Mar 13 00:38:26.923: INFO: Got endpoints: latency-svc-hgfkx [660.40961ms] Mar 13 00:38:26.960: INFO: Created: latency-svc-bt8w7 Mar 13 00:38:26.969: INFO: Got endpoints: latency-svc-bt8w7 [676.547116ms] Mar 13 00:38:27.001: INFO: Created: latency-svc-mb58h Mar 13 00:38:27.054: INFO: Got endpoints: latency-svc-mb58h [674.676639ms] Mar 13 00:38:27.068: INFO: Created: latency-svc-fjhth Mar 13 00:38:27.077: INFO: Got endpoints: latency-svc-fjhth [688.654971ms] Mar 13 00:38:27.099: INFO: Created: latency-svc-t2ssf Mar 13 00:38:27.186: INFO: Got endpoints: latency-svc-t2ssf [768.040188ms] Mar 13 00:38:27.200: INFO: Created: latency-svc-p8fx5 Mar 13 00:38:27.228: INFO: Got endpoints: latency-svc-p8fx5 [702.397714ms] Mar 13 00:38:27.255: INFO: Created: latency-svc-xvbnd Mar 13 00:38:27.278: INFO: Got endpoints: latency-svc-xvbnd [740.467768ms] Mar 13 00:38:27.319: INFO: Created: latency-svc-lb7z2 Mar 13 00:38:27.329: INFO: Got endpoints: latency-svc-lb7z2 [758.392775ms] Mar 13 00:38:27.362: INFO: Created: latency-svc-twwk2 Mar 13 00:38:27.370: INFO: Got endpoints: latency-svc-twwk2 [759.225916ms] Mar 13 00:38:27.392: INFO: Created: latency-svc-j94qn Mar 13 00:38:27.416: INFO: Got endpoints: latency-svc-j94qn [733.247046ms] Mar 13 00:38:27.452: INFO: Created: latency-svc-kbshp Mar 13 00:38:27.461: INFO: Got endpoints: latency-svc-kbshp [725.120044ms] Mar 13 00:38:27.476: INFO: Created: latency-svc-7dsqp Mar 13 00:38:27.479: INFO: Got endpoints: latency-svc-7dsqp [681.656664ms] Mar 13 00:38:27.507: INFO: Created: latency-svc-q5thf Mar 13 00:38:27.523: INFO: Got endpoints: latency-svc-q5thf [716.019545ms] Mar 13 00:38:27.587: 
INFO: Created: latency-svc-rl6hb Mar 13 00:38:27.599: INFO: Got endpoints: latency-svc-rl6hb [740.505407ms] Mar 13 00:38:27.620: INFO: Created: latency-svc-t2lvg Mar 13 00:38:27.628: INFO: Got endpoints: latency-svc-t2lvg [749.143459ms] Mar 13 00:38:27.657: INFO: Created: latency-svc-85t9n Mar 13 00:38:27.664: INFO: Got endpoints: latency-svc-85t9n [741.415188ms] Mar 13 00:38:27.687: INFO: Created: latency-svc-bs2vm Mar 13 00:38:27.715: INFO: Got endpoints: latency-svc-bs2vm [745.929813ms] Mar 13 00:38:27.727: INFO: Created: latency-svc-l5zft Mar 13 00:38:27.736: INFO: Got endpoints: latency-svc-l5zft [681.282957ms] Mar 13 00:38:27.752: INFO: Created: latency-svc-c2csh Mar 13 00:38:27.770: INFO: Created: latency-svc-s6vv8 Mar 13 00:38:27.770: INFO: Got endpoints: latency-svc-c2csh [693.184865ms] Mar 13 00:38:27.772: INFO: Got endpoints: latency-svc-s6vv8 [585.401733ms] Mar 13 00:38:27.790: INFO: Created: latency-svc-hwnct Mar 13 00:38:27.796: INFO: Got endpoints: latency-svc-hwnct [568.091675ms] Mar 13 00:38:27.840: INFO: Created: latency-svc-kgpfp Mar 13 00:38:27.849: INFO: Got endpoints: latency-svc-kgpfp [570.399728ms] Mar 13 00:38:27.872: INFO: Created: latency-svc-b2hhm Mar 13 00:38:27.881: INFO: Got endpoints: latency-svc-b2hhm [551.964841ms] Mar 13 00:38:27.914: INFO: Created: latency-svc-fhtg6 Mar 13 00:38:27.922: INFO: Got endpoints: latency-svc-fhtg6 [551.612541ms] Mar 13 00:38:27.970: INFO: Created: latency-svc-wzqbb Mar 13 00:38:27.981: INFO: Got endpoints: latency-svc-wzqbb [565.503599ms] Mar 13 00:38:28.005: INFO: Created: latency-svc-96xh6 Mar 13 00:38:28.024: INFO: Got endpoints: latency-svc-96xh6 [562.398433ms] Mar 13 00:38:28.024: INFO: Created: latency-svc-pmzfq Mar 13 00:38:28.046: INFO: Got endpoints: latency-svc-pmzfq [567.27848ms] Mar 13 00:38:28.046: INFO: Created: latency-svc-jbzjz Mar 13 00:38:28.071: INFO: Got endpoints: latency-svc-jbzjz [547.098445ms] Mar 13 00:38:28.121: INFO: Created: latency-svc-2pjrs Mar 13 00:38:28.125: INFO: Got endpoints: latency-svc-2pjrs [526.464498ms] Mar 13 00:38:28.148: INFO: Created: latency-svc-24g2b Mar 13 00:38:28.155: INFO: Got endpoints: latency-svc-24g2b [526.93539ms] Mar 13 00:38:28.208: INFO: Created: latency-svc-k9gxh Mar 13 00:38:28.264: INFO: Got endpoints: latency-svc-k9gxh [599.872673ms] Mar 13 00:38:28.266: INFO: Created: latency-svc-tlsmm Mar 13 00:38:28.270: INFO: Got endpoints: latency-svc-tlsmm [554.723326ms] Mar 13 00:38:28.286: INFO: Created: latency-svc-x4l5r Mar 13 00:38:28.305: INFO: Got endpoints: latency-svc-x4l5r [569.49545ms] Mar 13 00:38:28.335: INFO: Created: latency-svc-qcw75 Mar 13 00:38:28.341: INFO: Got endpoints: latency-svc-qcw75 [571.585598ms] Mar 13 00:38:28.390: INFO: Created: latency-svc-ss8jc Mar 13 00:38:28.431: INFO: Got endpoints: latency-svc-ss8jc [659.412813ms] Mar 13 00:38:28.432: INFO: Created: latency-svc-j787k Mar 13 00:38:28.515: INFO: Got endpoints: latency-svc-j787k [719.356946ms] Mar 13 00:38:28.527: INFO: Created: latency-svc-zdb6h Mar 13 00:38:28.540: INFO: Got endpoints: latency-svc-zdb6h [690.572025ms] Mar 13 00:38:28.575: INFO: Created: latency-svc-gn667 Mar 13 00:38:28.582: INFO: Got endpoints: latency-svc-gn667 [701.095162ms] Mar 13 00:38:28.641: INFO: Created: latency-svc-m5bnt Mar 13 00:38:28.689: INFO: Got endpoints: latency-svc-m5bnt [766.705933ms] Mar 13 00:38:28.689: INFO: Created: latency-svc-xhj5m Mar 13 00:38:28.701: INFO: Got endpoints: latency-svc-xhj5m [720.106234ms] Mar 13 00:38:28.737: INFO: Created: latency-svc-ssd25 Mar 13 00:38:28.776: INFO: Got endpoints: 
latency-svc-ssd25 [752.07101ms] Mar 13 00:38:28.781: INFO: Created: latency-svc-bgv47 Mar 13 00:38:28.785: INFO: Got endpoints: latency-svc-bgv47 [738.96786ms] Mar 13 00:38:28.808: INFO: Created: latency-svc-87vvt Mar 13 00:38:28.828: INFO: Created: latency-svc-km2bx Mar 13 00:38:28.830: INFO: Got endpoints: latency-svc-87vvt [759.443749ms] Mar 13 00:38:28.831: INFO: Got endpoints: latency-svc-km2bx [705.861351ms] Mar 13 00:38:28.869: INFO: Created: latency-svc-95nzc Mar 13 00:38:28.899: INFO: Got endpoints: latency-svc-95nzc [743.450346ms] Mar 13 00:38:28.916: INFO: Created: latency-svc-8t49g Mar 13 00:38:28.923: INFO: Got endpoints: latency-svc-8t49g [659.18097ms] Mar 13 00:38:28.959: INFO: Created: latency-svc-t4mwh Mar 13 00:38:28.971: INFO: Got endpoints: latency-svc-t4mwh [701.083159ms] Mar 13 00:38:28.988: INFO: Created: latency-svc-zs8xr Mar 13 00:38:28.995: INFO: Got endpoints: latency-svc-zs8xr [689.636364ms] Mar 13 00:38:29.036: INFO: Created: latency-svc-j9xn9 Mar 13 00:38:29.042: INFO: Got endpoints: latency-svc-j9xn9 [700.939572ms] Mar 13 00:38:29.091: INFO: Created: latency-svc-k69wf Mar 13 00:38:29.103: INFO: Got endpoints: latency-svc-k69wf [671.942881ms] Mar 13 00:38:29.121: INFO: Created: latency-svc-np4h4 Mar 13 00:38:29.127: INFO: Got endpoints: latency-svc-np4h4 [611.922944ms] Mar 13 00:38:29.174: INFO: Created: latency-svc-8z6l2 Mar 13 00:38:29.187: INFO: Got endpoints: latency-svc-8z6l2 [647.368679ms] Mar 13 00:38:29.205: INFO: Created: latency-svc-qqhfj Mar 13 00:38:29.211: INFO: Got endpoints: latency-svc-qqhfj [628.933181ms] Mar 13 00:38:29.229: INFO: Created: latency-svc-qdfbf Mar 13 00:38:29.235: INFO: Got endpoints: latency-svc-qdfbf [545.92197ms] Mar 13 00:38:29.253: INFO: Created: latency-svc-hdmch Mar 13 00:38:29.259: INFO: Got endpoints: latency-svc-hdmch [557.236555ms] Mar 13 00:38:29.300: INFO: Created: latency-svc-9s89c Mar 13 00:38:29.319: INFO: Created: latency-svc-7487c Mar 13 00:38:29.320: INFO: Got endpoints: latency-svc-9s89c [543.801853ms] Mar 13 00:38:29.336: INFO: Got endpoints: latency-svc-7487c [551.264534ms] Mar 13 00:38:29.355: INFO: Created: latency-svc-zz4fn Mar 13 00:38:29.360: INFO: Got endpoints: latency-svc-zz4fn [529.874013ms] Mar 13 00:38:29.379: INFO: Created: latency-svc-wjqwg Mar 13 00:38:29.384: INFO: Got endpoints: latency-svc-wjqwg [552.902122ms] Mar 13 00:38:29.425: INFO: Created: latency-svc-xgdll Mar 13 00:38:29.445: INFO: Got endpoints: latency-svc-xgdll [546.509451ms] Mar 13 00:38:29.447: INFO: Created: latency-svc-sbqp4 Mar 13 00:38:29.456: INFO: Got endpoints: latency-svc-sbqp4 [532.899066ms] Mar 13 00:38:29.486: INFO: Created: latency-svc-ggxrg Mar 13 00:38:29.498: INFO: Got endpoints: latency-svc-ggxrg [527.489141ms] Mar 13 00:38:29.528: INFO: Created: latency-svc-sgfnb Mar 13 00:38:29.557: INFO: Got endpoints: latency-svc-sgfnb [562.317412ms] Mar 13 00:38:29.566: INFO: Created: latency-svc-mzld6 Mar 13 00:38:29.606: INFO: Got endpoints: latency-svc-mzld6 [563.809776ms] Mar 13 00:38:29.638: INFO: Created: latency-svc-mshgl Mar 13 00:38:29.642: INFO: Got endpoints: latency-svc-mshgl [539.017449ms] Mar 13 00:38:29.689: INFO: Created: latency-svc-hmzd4 Mar 13 00:38:29.714: INFO: Got endpoints: latency-svc-hmzd4 [587.063176ms] Mar 13 00:38:29.716: INFO: Created: latency-svc-c6dk6 Mar 13 00:38:29.726: INFO: Got endpoints: latency-svc-c6dk6 [539.366244ms] Mar 13 00:38:29.746: INFO: Created: latency-svc-nzl2s Mar 13 00:38:29.756: INFO: Got endpoints: latency-svc-nzl2s [545.424337ms] Mar 13 00:38:29.769: INFO: Created: 
latency-svc-cdpzw Mar 13 00:38:29.781: INFO: Got endpoints: latency-svc-cdpzw [546.655634ms] Mar 13 00:38:29.821: INFO: Created: latency-svc-xl87g Mar 13 00:38:29.827: INFO: Got endpoints: latency-svc-xl87g [568.751057ms] Mar 13 00:38:29.847: INFO: Created: latency-svc-2fgjh Mar 13 00:38:29.851: INFO: Got endpoints: latency-svc-2fgjh [531.71761ms] Mar 13 00:38:29.866: INFO: Created: latency-svc-86v4m Mar 13 00:38:29.890: INFO: Got endpoints: latency-svc-86v4m [553.430802ms] Mar 13 00:38:29.914: INFO: Created: latency-svc-ms86m Mar 13 00:38:29.947: INFO: Got endpoints: latency-svc-ms86m [587.021831ms] Mar 13 00:38:29.967: INFO: Created: latency-svc-psjfc Mar 13 00:38:30.223: INFO: Got endpoints: latency-svc-psjfc [838.570528ms] Mar 13 00:38:30.255: INFO: Created: latency-svc-x55qw Mar 13 00:38:30.522: INFO: Got endpoints: latency-svc-x55qw [1.076634656s] Mar 13 00:38:30.549: INFO: Created: latency-svc-przxc Mar 13 00:38:30.559: INFO: Got endpoints: latency-svc-przxc [1.102880522s] Mar 13 00:38:30.579: INFO: Created: latency-svc-9dd9n Mar 13 00:38:30.589: INFO: Got endpoints: latency-svc-9dd9n [1.090771597s] Mar 13 00:38:30.671: INFO: Created: latency-svc-dn5pl Mar 13 00:38:30.700: INFO: Created: latency-svc-jwjwn Mar 13 00:38:30.700: INFO: Got endpoints: latency-svc-dn5pl [1.142554233s] Mar 13 00:38:30.709: INFO: Got endpoints: latency-svc-jwjwn [1.102284954s] Mar 13 00:38:30.730: INFO: Created: latency-svc-q58sq Mar 13 00:38:30.733: INFO: Got endpoints: latency-svc-q58sq [1.090757019s] Mar 13 00:38:30.809: INFO: Created: latency-svc-phlz7 Mar 13 00:38:30.837: INFO: Created: latency-svc-644nm Mar 13 00:38:30.837: INFO: Got endpoints: latency-svc-phlz7 [1.122475261s] Mar 13 00:38:30.846: INFO: Got endpoints: latency-svc-644nm [1.119916168s] Mar 13 00:38:30.862: INFO: Created: latency-svc-llr27 Mar 13 00:38:30.870: INFO: Got endpoints: latency-svc-llr27 [1.114046683s] Mar 13 00:38:30.947: INFO: Created: latency-svc-dfntk Mar 13 00:38:30.974: INFO: Got endpoints: latency-svc-dfntk [1.192598742s] Mar 13 00:38:30.974: INFO: Created: latency-svc-9djsk Mar 13 00:38:30.978: INFO: Got endpoints: latency-svc-9djsk [1.150309704s] Mar 13 00:38:31.029: INFO: Created: latency-svc-pbcfp Mar 13 00:38:31.066: INFO: Got endpoints: latency-svc-pbcfp [1.215061991s] Mar 13 00:38:31.100: INFO: Created: latency-svc-wd7s7 Mar 13 00:38:31.110: INFO: Got endpoints: latency-svc-wd7s7 [1.219923846s] Mar 13 00:38:31.131: INFO: Created: latency-svc-vs2j8 Mar 13 00:38:31.133: INFO: Got endpoints: latency-svc-vs2j8 [1.186171358s] Mar 13 00:38:31.155: INFO: Created: latency-svc-2hf6l Mar 13 00:38:31.204: INFO: Got endpoints: latency-svc-2hf6l [981.694088ms] Mar 13 00:38:31.215: INFO: Created: latency-svc-bkgjg Mar 13 00:38:31.224: INFO: Got endpoints: latency-svc-bkgjg [701.407902ms] Mar 13 00:38:31.245: INFO: Created: latency-svc-fzr79 Mar 13 00:38:31.255: INFO: Got endpoints: latency-svc-fzr79 [695.479806ms] Mar 13 00:38:31.275: INFO: Created: latency-svc-ncplw Mar 13 00:38:31.284: INFO: Got endpoints: latency-svc-ncplw [694.65128ms] Mar 13 00:38:31.336: INFO: Created: latency-svc-bjwg9 Mar 13 00:38:31.360: INFO: Created: latency-svc-rdg4j Mar 13 00:38:31.360: INFO: Got endpoints: latency-svc-bjwg9 [660.370749ms] Mar 13 00:38:31.367: INFO: Got endpoints: latency-svc-rdg4j [658.823532ms] Mar 13 00:38:31.385: INFO: Created: latency-svc-bvq5w Mar 13 00:38:31.386: INFO: Got endpoints: latency-svc-bvq5w [652.852735ms] Mar 13 00:38:31.407: INFO: Created: latency-svc-wl92d Mar 13 00:38:31.416: INFO: Got endpoints: 
latency-svc-wl92d [578.994101ms] Mar 13 00:38:31.480: INFO: Created: latency-svc-hvj5n Mar 13 00:38:31.510: INFO: Got endpoints: latency-svc-hvj5n [663.68683ms] Mar 13 00:38:31.510: INFO: Created: latency-svc-99775 Mar 13 00:38:31.517: INFO: Got endpoints: latency-svc-99775 [646.795301ms] Mar 13 00:38:31.540: INFO: Created: latency-svc-qjjww Mar 13 00:38:31.547: INFO: Got endpoints: latency-svc-qjjww [573.175606ms] Mar 13 00:38:31.599: INFO: Created: latency-svc-wb5s2 Mar 13 00:38:31.611: INFO: Got endpoints: latency-svc-wb5s2 [633.088076ms] Mar 13 00:38:31.623: INFO: Created: latency-svc-45kbk Mar 13 00:38:31.625: INFO: Got endpoints: latency-svc-45kbk [558.409511ms] Mar 13 00:38:31.652: INFO: Created: latency-svc-kljzc Mar 13 00:38:31.655: INFO: Got endpoints: latency-svc-kljzc [545.617253ms] Mar 13 00:38:31.678: INFO: Created: latency-svc-nrpgb Mar 13 00:38:31.685: INFO: Got endpoints: latency-svc-nrpgb [552.004529ms] Mar 13 00:38:31.726: INFO: Created: latency-svc-crcx6 Mar 13 00:38:31.749: INFO: Got endpoints: latency-svc-crcx6 [544.845344ms] Mar 13 00:38:31.750: INFO: Created: latency-svc-w27ph Mar 13 00:38:31.757: INFO: Got endpoints: latency-svc-w27ph [533.685547ms] Mar 13 00:38:31.779: INFO: Created: latency-svc-7vqg9 Mar 13 00:38:31.787: INFO: Got endpoints: latency-svc-7vqg9 [532.469692ms] Mar 13 00:38:31.809: INFO: Created: latency-svc-k6g47 Mar 13 00:38:31.817: INFO: Got endpoints: latency-svc-k6g47 [533.782027ms] Mar 13 00:38:31.857: INFO: Created: latency-svc-pv66r Mar 13 00:38:31.882: INFO: Got endpoints: latency-svc-pv66r [521.199692ms] Mar 13 00:38:31.882: INFO: Created: latency-svc-zlbg6 Mar 13 00:38:31.906: INFO: Got endpoints: latency-svc-zlbg6 [538.577774ms] Mar 13 00:38:31.906: INFO: Created: latency-svc-wjmtl Mar 13 00:38:31.913: INFO: Got endpoints: latency-svc-wjmtl [527.198952ms] Mar 13 00:38:31.953: INFO: Created: latency-svc-hfln7 Mar 13 00:38:31.983: INFO: Created: latency-svc-6zj6s Mar 13 00:38:31.984: INFO: Got endpoints: latency-svc-hfln7 [567.594155ms] Mar 13 00:38:32.009: INFO: Got endpoints: latency-svc-6zj6s [498.660648ms] Mar 13 00:38:32.009: INFO: Created: latency-svc-mztl2 Mar 13 00:38:32.032: INFO: Got endpoints: latency-svc-mztl2 [514.585656ms] Mar 13 00:38:32.056: INFO: Created: latency-svc-djkfk Mar 13 00:38:32.063: INFO: Got endpoints: latency-svc-djkfk [515.58746ms] Mar 13 00:38:32.102: INFO: Created: latency-svc-6nxgz Mar 13 00:38:32.111: INFO: Got endpoints: latency-svc-6nxgz [499.861512ms] Mar 13 00:38:32.133: INFO: Created: latency-svc-qsw2w Mar 13 00:38:32.135: INFO: Got endpoints: latency-svc-qsw2w [509.658848ms] Mar 13 00:38:32.157: INFO: Created: latency-svc-ldfdx Mar 13 00:38:32.171: INFO: Got endpoints: latency-svc-ldfdx [515.54956ms] Mar 13 00:38:32.217: INFO: Created: latency-svc-fwr6l Mar 13 00:38:32.225: INFO: Got endpoints: latency-svc-fwr6l [539.607308ms] Mar 13 00:38:32.247: INFO: Created: latency-svc-q68lv Mar 13 00:38:32.255: INFO: Got endpoints: latency-svc-q68lv [505.490094ms] Mar 13 00:38:32.255: INFO: Latencies: [17.705253ms 75.339147ms 81.506983ms 111.141687ms 142.468909ms 199.725966ms 208.964633ms 243.864651ms 315.662513ms 339.855046ms 399.429444ms 447.272406ms 477.894767ms 498.660648ms 499.861512ms 505.490094ms 507.882998ms 509.658848ms 514.585656ms 515.54956ms 515.58746ms 521.199692ms 526.464498ms 526.93539ms 527.198952ms 527.489141ms 529.874013ms 531.71761ms 532.469692ms 532.899066ms 533.685547ms 533.782027ms 538.577774ms 539.017449ms 539.366244ms 539.607308ms 543.801853ms 544.845344ms 545.424337ms 545.617253ms 
545.92197ms 546.509451ms 546.655634ms 547.098445ms 551.264534ms 551.535089ms 551.612541ms 551.964841ms 552.004529ms 552.902122ms 553.430802ms 554.723326ms 557.236555ms 557.259467ms 557.268148ms 558.409511ms 562.317412ms 562.398433ms 563.809776ms 564.323249ms 565.503599ms 567.27848ms 567.594155ms 568.091675ms 568.751057ms 569.49545ms 570.399728ms 571.077583ms 571.585598ms 573.175606ms 574.146835ms 575.799458ms 577.220609ms 578.994101ms 580.609312ms 581.026691ms 582.158564ms 585.401733ms 586.608284ms 586.796832ms 587.021831ms 587.063176ms 594.488036ms 594.628928ms 597.794311ms 599.204422ms 599.318841ms 599.872673ms 599.949697ms 602.810889ms 605.041408ms 605.215743ms 605.232696ms 605.479334ms 611.081724ms 611.922944ms 612.435374ms 616.408256ms 617.94625ms 620.343767ms 622.859458ms 623.089744ms 623.134594ms 623.401973ms 624.104064ms 625.673691ms 628.815767ms 628.933181ms 629.704345ms 632.674739ms 633.088076ms 634.974776ms 635.168762ms 635.466972ms 641.679997ms 642.996268ms 644.252395ms 646.206394ms 646.795301ms 647.026818ms 647.368679ms 648.204439ms 648.276216ms 648.463506ms 652.852735ms 653.184359ms 653.701604ms 653.99041ms 658.099528ms 658.611208ms 658.823532ms 659.18097ms 659.412813ms 660.370749ms 660.40961ms 663.327409ms 663.68683ms 664.894899ms 665.099559ms 669.691554ms 671.102464ms 671.87527ms 671.942881ms 673.851169ms 674.676639ms 675.893125ms 676.547116ms 676.997015ms 680.99074ms 681.282957ms 681.656664ms 688.654971ms 689.636364ms 690.572025ms 693.184865ms 694.65128ms 694.97101ms 695.479806ms 700.939572ms 701.083159ms 701.095162ms 701.2575ms 701.407902ms 702.397714ms 705.861351ms 707.108334ms 716.019545ms 719.356946ms 720.106234ms 725.120044ms 733.247046ms 738.96786ms 740.467768ms 740.505407ms 741.415188ms 743.450346ms 745.929813ms 749.143459ms 752.07101ms 758.392775ms 759.225916ms 759.443749ms 766.705933ms 768.040188ms 838.570528ms 981.694088ms 1.076634656s 1.090757019s 1.090771597s 1.102284954s 1.102880522s 1.114046683s 1.119916168s 1.122475261s 1.142554233s 1.150309704s 1.186171358s 1.192598742s 1.215061991s 1.219923846s] Mar 13 00:38:32.255: INFO: 50 %ile: 622.859458ms Mar 13 00:38:32.255: INFO: 90 %ile: 759.225916ms Mar 13 00:38:32.255: INFO: 99 %ile: 1.215061991s Mar 13 00:38:32.255: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:38:32.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-361" for this suite. 
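The latency figures above are 200 endpoint-propagation samples reduced to 50/90/99 percentiles. A small self-contained sketch of that reduction, assuming simple nearest-rank indexing (the framework's exact rounding is not shown in the log); the sample values are a handful of the nanosecond durations printed above:

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile picks the p-th percentile (0-100) from an ascending slice
// using nearest-rank indexing (an assumption about the rounding scheme).
func percentile(sorted []time.Duration, p int) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	i := p * len(sorted) / 100
	if i >= len(sorted) {
		i = len(sorted) - 1
	}
	return sorted[i]
}

func main() {
	samples := []time.Duration{ // a few of the 200 samples from the log
		17705253 * time.Nanosecond,   // 17.705253ms
		622859458 * time.Nanosecond,  // 622.859458ms
		759225916 * time.Nanosecond,  // 759.225916ms
		1215061991 * time.Nanosecond, // 1.215061991s
		1219923846 * time.Nanosecond, // 1.219923846s
	}
	sort.Slice(samples, func(a, b int) bool { return samples[a] < samples[b] })
	for _, p := range []int{50, 90, 99} {
		fmt.Printf("%d %%ile: %v\n", p, percentile(samples, p))
	}
}
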
• [SLOW TEST:10.996 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":275,"completed":233,"skipped":3946,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:38:32.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 13 00:38:32.309: INFO: Waiting up to 5m0s for pod "downwardapi-volume-49edb2ab-8382-4c9e-81e5-b75d439c77d3" in namespace "downward-api-4232" to be "Succeeded or Failed" Mar 13 00:38:32.314: INFO: Pod "downwardapi-volume-49edb2ab-8382-4c9e-81e5-b75d439c77d3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.873732ms Mar 13 00:38:34.317: INFO: Pod "downwardapi-volume-49edb2ab-8382-4c9e-81e5-b75d439c77d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007748796s STEP: Saw pod success Mar 13 00:38:34.317: INFO: Pod "downwardapi-volume-49edb2ab-8382-4c9e-81e5-b75d439c77d3" satisfied condition "Succeeded or Failed" Mar 13 00:38:34.319: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-49edb2ab-8382-4c9e-81e5-b75d439c77d3 container client-container: STEP: delete the pod Mar 13 00:38:34.362: INFO: Waiting for pod downwardapi-volume-49edb2ab-8382-4c9e-81e5-b75d439c77d3 to disappear Mar 13 00:38:34.369: INFO: Pod downwardapi-volume-49edb2ab-8382-4c9e-81e5-b75d439c77d3 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:38:34.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4232" for this suite. 
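The downward API test above projects pod metadata into a volume and asserts that DefaultMode is applied to the projected files. A sketch of such a pod spec; the mode value (0400), image, and verification command are assumptions, since the log records only the pod name and the Succeeded outcome:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // assumed mode under test
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-defaultmode"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						DefaultMode: &mode, // applied to every file in the volume
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "podname",
							FieldRef: &corev1.ObjectFieldSelector{
								APIVersion: "v1",
								FieldPath:  "metadata.name",
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container", // container name echoes the log
				Image:   "docker.io/library/busybox:1.29", // assumed image
				Command: []string{"/bin/sh", "-c", "stat -c %a /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
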
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":234,"skipped":3951,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:38:34.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:38:37.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8945" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":275,"completed":235,"skipped":3973,"failed":0} SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:38:37.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-cn4p STEP: Creating a pod to test atomic-volume-subpath Mar 13 00:38:37.614: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-cn4p" in namespace "subpath-6012" to be "Succeeded or Failed" Mar 13 00:38:37.638: INFO: Pod "pod-subpath-test-configmap-cn4p": Phase="Pending", Reason="", readiness=false. Elapsed: 23.822777ms Mar 13 00:38:39.665: INFO: Pod "pod-subpath-test-configmap-cn4p": Phase="Running", Reason="", readiness=true. Elapsed: 2.05113252s Mar 13 00:38:41.693: INFO: Pod "pod-subpath-test-configmap-cn4p": Phase="Running", Reason="", readiness=true. Elapsed: 4.078826273s Mar 13 00:38:43.732: INFO: Pod "pod-subpath-test-configmap-cn4p": Phase="Running", Reason="", readiness=true. Elapsed: 6.117619232s Mar 13 00:38:45.742: INFO: Pod "pod-subpath-test-configmap-cn4p": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.127351975s Mar 13 00:38:47.745: INFO: Pod "pod-subpath-test-configmap-cn4p": Phase="Running", Reason="", readiness=true. Elapsed: 10.131141878s Mar 13 00:38:49.748: INFO: Pod "pod-subpath-test-configmap-cn4p": Phase="Running", Reason="", readiness=true. Elapsed: 12.133429807s Mar 13 00:38:51.756: INFO: Pod "pod-subpath-test-configmap-cn4p": Phase="Running", Reason="", readiness=true. Elapsed: 14.141361465s Mar 13 00:38:53.759: INFO: Pod "pod-subpath-test-configmap-cn4p": Phase="Running", Reason="", readiness=true. Elapsed: 16.144845524s Mar 13 00:38:55.763: INFO: Pod "pod-subpath-test-configmap-cn4p": Phase="Running", Reason="", readiness=true. Elapsed: 18.148444659s Mar 13 00:38:57.765: INFO: Pod "pod-subpath-test-configmap-cn4p": Phase="Running", Reason="", readiness=true. Elapsed: 20.151127647s Mar 13 00:38:59.768: INFO: Pod "pod-subpath-test-configmap-cn4p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.153779087s STEP: Saw pod success Mar 13 00:38:59.768: INFO: Pod "pod-subpath-test-configmap-cn4p" satisfied condition "Succeeded or Failed" Mar 13 00:38:59.769: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-cn4p container test-container-subpath-configmap-cn4p: STEP: delete the pod Mar 13 00:38:59.789: INFO: Waiting for pod pod-subpath-test-configmap-cn4p to disappear Mar 13 00:38:59.801: INFO: Pod pod-subpath-test-configmap-cn4p no longer exists STEP: Deleting pod pod-subpath-test-configmap-cn4p Mar 13 00:38:59.801: INFO: Deleting pod "pod-subpath-test-configmap-cn4p" in namespace "subpath-6012" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:38:59.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6012" for this suite. 
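The subpath test above mounts a single ConfigMap key over a file that already exists in the container image, using a volumeMount with SubPath. A sketch under assumed names (the ConfigMap name, key, target file, image, and command are not shown in the log; only the pod and container names are):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-configmap-cn4p"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"}, // assumed
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath-configmap-cn4p",
				Image:   "docker.io/library/busybox:1.29", // assumed image
				Command: []string{"/bin/sh", "-c", "cat /etc/hostname"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/hostname",             // a file that already exists in the image
					SubPath:   "this_should_be_the_key",    // assumed key inside the ConfigMap
					ReadOnly:  true,
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
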
• [SLOW TEST:22.298 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":275,"completed":236,"skipped":3978,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:38:59.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-9412 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 13 00:38:59.868: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 13 00:38:59.897: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 13 00:39:01.900: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 13 00:39:03.901: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 13 00:39:05.901: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 13 00:39:07.900: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 13 00:39:09.900: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 13 00:39:11.901: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 13 00:39:13.946: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 13 00:39:15.901: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 13 00:39:15.906: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 13 00:39:17.924: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.75:8080/dial?request=hostname&protocol=udp&host=10.244.1.74&port=8081&tries=1'] Namespace:pod-network-test-9412 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 13 00:39:17.924: INFO: >>> kubeConfig: /root/.kube/config I0313 00:39:17.959162 7 log.go:172] (0xc001d67810) (0xc001c03b80) Create stream I0313 00:39:17.959198 7 log.go:172] (0xc001d67810) (0xc001c03b80) Stream added, broadcasting: 1 I0313 00:39:17.961367 7 log.go:172] (0xc001d67810) Reply frame received for 1 I0313 00:39:17.961403 7 log.go:172] (0xc001d67810) (0xc001c03c20) Create stream I0313 00:39:17.961417 7 log.go:172] (0xc001d67810) (0xc001c03c20) Stream added, broadcasting: 3 I0313 00:39:17.962558 7 log.go:172] (0xc001d67810) 
Reply frame received for 3 I0313 00:39:17.962595 7 log.go:172] (0xc001d67810) (0xc001d8c5a0) Create stream I0313 00:39:17.962608 7 log.go:172] (0xc001d67810) (0xc001d8c5a0) Stream added, broadcasting: 5 I0313 00:39:17.963574 7 log.go:172] (0xc001d67810) Reply frame received for 5 I0313 00:39:18.025962 7 log.go:172] (0xc001d67810) Data frame received for 3 I0313 00:39:18.025998 7 log.go:172] (0xc001c03c20) (3) Data frame handling I0313 00:39:18.026022 7 log.go:172] (0xc001c03c20) (3) Data frame sent I0313 00:39:18.026594 7 log.go:172] (0xc001d67810) Data frame received for 3 I0313 00:39:18.026629 7 log.go:172] (0xc001c03c20) (3) Data frame handling I0313 00:39:18.026660 7 log.go:172] (0xc001d67810) Data frame received for 5 I0313 00:39:18.026684 7 log.go:172] (0xc001d8c5a0) (5) Data frame handling I0313 00:39:18.028111 7 log.go:172] (0xc001d67810) Data frame received for 1 I0313 00:39:18.028131 7 log.go:172] (0xc001c03b80) (1) Data frame handling I0313 00:39:18.028146 7 log.go:172] (0xc001c03b80) (1) Data frame sent I0313 00:39:18.028164 7 log.go:172] (0xc001d67810) (0xc001c03b80) Stream removed, broadcasting: 1 I0313 00:39:18.028180 7 log.go:172] (0xc001d67810) Go away received I0313 00:39:18.028344 7 log.go:172] (0xc001d67810) (0xc001c03b80) Stream removed, broadcasting: 1 I0313 00:39:18.028363 7 log.go:172] (0xc001d67810) (0xc001c03c20) Stream removed, broadcasting: 3 I0313 00:39:18.028371 7 log.go:172] (0xc001d67810) (0xc001d8c5a0) Stream removed, broadcasting: 5 Mar 13 00:39:18.028: INFO: Waiting for responses: map[] Mar 13 00:39:18.031: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.75:8080/dial?request=hostname&protocol=udp&host=10.244.2.54&port=8081&tries=1'] Namespace:pod-network-test-9412 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 13 00:39:18.031: INFO: >>> kubeConfig: /root/.kube/config I0313 00:39:18.060281 7 log.go:172] (0xc001d61970) (0xc000d40820) Create stream I0313 00:39:18.060308 7 log.go:172] (0xc001d61970) (0xc000d40820) Stream added, broadcasting: 1 I0313 00:39:18.062536 7 log.go:172] (0xc001d61970) Reply frame received for 1 I0313 00:39:18.062593 7 log.go:172] (0xc001d61970) (0xc000d408c0) Create stream I0313 00:39:18.062607 7 log.go:172] (0xc001d61970) (0xc000d408c0) Stream added, broadcasting: 3 I0313 00:39:18.063429 7 log.go:172] (0xc001d61970) Reply frame received for 3 I0313 00:39:18.063458 7 log.go:172] (0xc001d61970) (0xc000d40aa0) Create stream I0313 00:39:18.063472 7 log.go:172] (0xc001d61970) (0xc000d40aa0) Stream added, broadcasting: 5 I0313 00:39:18.064350 7 log.go:172] (0xc001d61970) Reply frame received for 5 I0313 00:39:18.138816 7 log.go:172] (0xc001d61970) Data frame received for 3 I0313 00:39:18.138875 7 log.go:172] (0xc000d408c0) (3) Data frame handling I0313 00:39:18.138924 7 log.go:172] (0xc000d408c0) (3) Data frame sent I0313 00:39:18.139097 7 log.go:172] (0xc001d61970) Data frame received for 3 I0313 00:39:18.139110 7 log.go:172] (0xc000d408c0) (3) Data frame handling I0313 00:39:18.139172 7 log.go:172] (0xc001d61970) Data frame received for 5 I0313 00:39:18.139207 7 log.go:172] (0xc000d40aa0) (5) Data frame handling I0313 00:39:18.140576 7 log.go:172] (0xc001d61970) Data frame received for 1 I0313 00:39:18.140590 7 log.go:172] (0xc000d40820) (1) Data frame handling I0313 00:39:18.140597 7 log.go:172] (0xc000d40820) (1) Data frame sent I0313 00:39:18.140612 7 log.go:172] (0xc001d61970) (0xc000d40820) Stream removed, broadcasting: 1 I0313 
00:39:18.140638 7 log.go:172] (0xc001d61970) Go away received I0313 00:39:18.140699 7 log.go:172] (0xc001d61970) (0xc000d40820) Stream removed, broadcasting: 1 I0313 00:39:18.140721 7 log.go:172] (0xc001d61970) (0xc000d408c0) Stream removed, broadcasting: 3 I0313 00:39:18.140735 7 log.go:172] (0xc001d61970) (0xc000d40aa0) Stream removed, broadcasting: 5 Mar 13 00:39:18.140: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:39:18.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9412" for this suite. • [SLOW TEST:18.335 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":237,"skipped":3995,"failed":0} [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:39:18.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:39:34.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6539" for this suite. • [SLOW TEST:16.272 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":275,"completed":238,"skipped":3995,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:39:34.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-8c1768d2-054d-4865-a7e4-e34bd81fd65e STEP: Creating a pod to test consume configMaps Mar 13 00:39:34.486: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bb9aa274-d52a-4c05-8485-1089217e8227" in namespace "projected-3776" to be "Succeeded or Failed" Mar 13 00:39:34.492: INFO: Pod "pod-projected-configmaps-bb9aa274-d52a-4c05-8485-1089217e8227": Phase="Pending", Reason="", readiness=false. Elapsed: 5.534763ms Mar 13 00:39:36.496: INFO: Pod "pod-projected-configmaps-bb9aa274-d52a-4c05-8485-1089217e8227": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009361213s STEP: Saw pod success Mar 13 00:39:36.496: INFO: Pod "pod-projected-configmaps-bb9aa274-d52a-4c05-8485-1089217e8227" satisfied condition "Succeeded or Failed" Mar 13 00:39:36.499: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-bb9aa274-d52a-4c05-8485-1089217e8227 container projected-configmap-volume-test: STEP: delete the pod Mar 13 00:39:36.569: INFO: Waiting for pod pod-projected-configmaps-bb9aa274-d52a-4c05-8485-1089217e8227 to disappear Mar 13 00:39:36.575: INFO: Pod pod-projected-configmaps-bb9aa274-d52a-4c05-8485-1089217e8227 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:39:36.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3776" for this suite. 
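The projected-volume test above consumes the ConfigMap created in the log through a projected volume source rather than a plain configMap volume. A sketch of the pod spec; the ConfigMap name is the one from the log, while the mount path, image, and command are assumptions:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						// A projected volume can merge several sources;
						// here a single ConfigMap projection is enough.
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									Name: "projected-configmap-test-volume-8c1768d2-054d-4865-a7e4-e34bd81fd65e",
								},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "docker.io/library/busybox:1.29", // assumed image
				Command: []string{"/bin/sh", "-c", "cat /etc/projected-configmap-volume/*"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
					ReadOnly:  true,
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
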
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":239,"skipped":4014,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:39:36.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 13 00:39:36.636: INFO: Waiting up to 5m0s for pod "pod-d03e14fc-eb7a-4f64-86d3-afef529e6794" in namespace "emptydir-5118" to be "Succeeded or Failed" Mar 13 00:39:36.642: INFO: Pod "pod-d03e14fc-eb7a-4f64-86d3-afef529e6794": Phase="Pending", Reason="", readiness=false. Elapsed: 5.417768ms Mar 13 00:39:38.645: INFO: Pod "pod-d03e14fc-eb7a-4f64-86d3-afef529e6794": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008321466s STEP: Saw pod success Mar 13 00:39:38.645: INFO: Pod "pod-d03e14fc-eb7a-4f64-86d3-afef529e6794" satisfied condition "Succeeded or Failed" Mar 13 00:39:38.647: INFO: Trying to get logs from node latest-worker pod pod-d03e14fc-eb7a-4f64-86d3-afef529e6794 container test-container: STEP: delete the pod Mar 13 00:39:38.666: INFO: Waiting for pod pod-d03e14fc-eb7a-4f64-86d3-afef529e6794 to disappear Mar 13 00:39:38.670: INFO: Pod pod-d03e14fc-eb7a-4f64-86d3-afef529e6794 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:39:38.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5118" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":240,"skipped":4032,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:39:38.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 13 00:39:39.790: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 13 00:39:41.799: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719656779, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719656779, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719656779, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719656779, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 13 00:39:44.852: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Mar 13 00:39:49.055: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config attach --namespace=webhook-7221 to-be-attached-pod -i -c=container1' Mar 13 00:39:50.895: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:39:50.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7221" for this suite. STEP: Destroying namespace "webhook-7221-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.337 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":275,"completed":241,"skipped":4033,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:39:51.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 13 00:39:51.084: INFO: Create a RollingUpdate DaemonSet Mar 13 00:39:51.087: INFO: Check that daemon pods launch on every node of the cluster Mar 13 00:39:51.090: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:39:51.112: INFO: Number of nodes with available pods: 0 Mar 13 00:39:51.112: INFO: Node latest-worker is running more than one daemon pod Mar 13 00:39:52.117: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:39:52.119: INFO: Number of nodes with available pods: 0 Mar 13 00:39:52.120: INFO: Node latest-worker is running more than one daemon pod Mar 13 00:39:53.126: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:39:53.129: INFO: Number of nodes with available pods: 1 Mar 13 00:39:53.129: INFO: Node latest-worker2 is running more than one daemon pod Mar 13 00:39:54.116: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:39:54.119: INFO: Number of nodes with available pods: 2 Mar 13 00:39:54.119: INFO: Number of running nodes: 2, number of available pods: 2 Mar 13 00:39:54.119: INFO: Update the DaemonSet to trigger a rollout Mar 13 00:39:54.125: INFO: Updating DaemonSet daemon-set Mar 13 00:39:57.158: INFO: Roll back the DaemonSet before rollout is complete Mar 13 00:39:57.174: INFO: Updating DaemonSet daemon-set Mar 13 00:39:57.174: INFO: Make sure DaemonSet rollback is complete Mar 13 00:39:57.180: INFO: Wrong image for pod: 
daemon-set-4xcvg. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Mar 13 00:39:57.180: INFO: Pod daemon-set-4xcvg is not available Mar 13 00:39:57.186: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:39:58.191: INFO: Wrong image for pod: daemon-set-4xcvg. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Mar 13 00:39:58.191: INFO: Pod daemon-set-4xcvg is not available Mar 13 00:39:58.194: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 13 00:39:59.205: INFO: Pod daemon-set-zl2nh is not available Mar 13 00:39:59.236: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2246, will wait for the garbage collector to delete the pods Mar 13 00:39:59.301: INFO: Deleting DaemonSet.extensions daemon-set took: 6.129266ms Mar 13 00:39:59.601: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.232511ms Mar 13 00:40:12.203: INFO: Number of nodes with available pods: 0 Mar 13 00:40:12.203: INFO: Number of running nodes: 0, number of available pods: 0 Mar 13 00:40:12.204: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2246/daemonsets","resourceVersion":"1230646"},"items":null} Mar 13 00:40:12.206: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2246/pods","resourceVersion":"1230646"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:40:12.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2246" for this suite. 
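Annotation: the rollback test drives exactly two image updates, which is where the "Wrong image for pod" lines come from: the DaemonSet is updated to an unpullable image to trigger a rollout, then reverted before the rollout finishes. A minimal sketch, again reusing the earlier clientset setup; only the DaemonSet name and the two images are taken from the log.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// rollOutAndBack mirrors the "Update the DaemonSet to trigger a rollout" /
// "Roll back the DaemonSet before rollout is complete" steps above.
func rollOutAndBack(ctx context.Context, clientset kubernetes.Interface, ns string) error {
	ds, err := clientset.AppsV1().DaemonSets(ns).Get(ctx, "daemon-set", metav1.GetOptions{})
	if err != nil {
		return err
	}
	goodImage := ds.Spec.Template.Spec.Containers[0].Image // docker.io/library/httpd:2.4.38-alpine in the log

	// Trigger a rollout with an image that can never be pulled.
	ds.Spec.Template.Spec.Containers[0].Image = "foo:non-existent"
	if ds, err = clientset.AppsV1().DaemonSets(ns).Update(ctx, ds, metav1.UpdateOptions{}); err != nil {
		return err
	}

	// Roll back before the rollout completes; pods still running the good
	// image must not be restarted, which is the property under test
	// ("without unnecessary restarts").
	ds.Spec.Template.Spec.Containers[0].Image = goodImage
	_, err = clientset.AppsV1().DaemonSets(ns).Update(ctx, ds, metav1.UpdateOptions{})
	return err
}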
• [SLOW TEST:21.202 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":275,"completed":242,"skipped":4057,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:40:12.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 13 00:40:12.320: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:40:18.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2588" for this suite. 
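Annotation: listing CustomResourceDefinitions goes through the apiextensions clientset rather than the core one, since CRDs live in the apiextensions.k8s.io group and are cluster-scoped. A minimal sketch, assuming the same *rest.Config built in the first sketch:

package main

import (
	"context"
	"fmt"

	apiextensionsclientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/rest"
)

// listCRDs prints the names of all CustomResourceDefinitions in the cluster.
func listCRDs(ctx context.Context, config *rest.Config) error {
	client, err := apiextensionsclientset.NewForConfig(config)
	if err != nil {
		return err
	}
	crds, err := client.ApiextensionsV1().CustomResourceDefinitions().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, crd := range crds.Items {
		fmt.Println(crd.Name)
	}
	return nil
}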
• [SLOW TEST:6.503 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":275,"completed":243,"skipped":4141,"failed":0} SSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:40:18.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Mar 13 00:40:18.786: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 13 00:40:18.795: INFO: Waiting for terminating namespaces to be deleted... Mar 13 00:40:18.796: INFO: Logging pods the kubelet thinks is on node latest-worker before test Mar 13 00:40:18.799: INFO: kube-proxy-9jc24 from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container statuses recorded) Mar 13 00:40:18.799: INFO: Container kube-proxy ready: true, restart count 0 Mar 13 00:40:18.799: INFO: kindnet-2j5xm from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container statuses recorded) Mar 13 00:40:18.799: INFO: Container kindnet-cni ready: true, restart count 0 Mar 13 00:40:18.799: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Mar 13 00:40:18.802: INFO: coredns-6955765f44-cgshp from kube-system started at 2020-03-08 14:50:16 +0000 UTC (1 container statuses recorded) Mar 13 00:40:18.802: INFO: Container coredns ready: true, restart count 0 Mar 13 00:40:18.802: INFO: kindnet-spz5f from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container statuses recorded) Mar 13 00:40:18.802: INFO: Container kindnet-cni ready: true, restart count 0 Mar 13 00:40:18.802: INFO: kube-proxy-cx5xz from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container statuses recorded) Mar 13 00:40:18.802: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 Mar 13 00:40:18.878: INFO: Pod coredns-6955765f44-cgshp requesting resource cpu=100m on Node latest-worker2 Mar 13 00:40:18.878: INFO: Pod kindnet-2j5xm requesting resource cpu=100m on Node latest-worker Mar 13 00:40:18.878: INFO: Pod kindnet-spz5f 
requesting resource cpu=100m on Node latest-worker2 Mar 13 00:40:18.878: INFO: Pod kube-proxy-9jc24 requesting resource cpu=0m on Node latest-worker Mar 13 00:40:18.878: INFO: Pod kube-proxy-cx5xz requesting resource cpu=0m on Node latest-worker2 STEP: Starting Pods to consume most of the cluster CPU. Mar 13 00:40:18.878: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker Mar 13 00:40:18.883: INFO: Creating a pod which consumes cpu=11060m on Node latest-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-65955618-773a-4d5e-a2cd-6d7439665a03.15fbb631c910b4c5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4066/filler-pod-65955618-773a-4d5e-a2cd-6d7439665a03 to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-65955618-773a-4d5e-a2cd-6d7439665a03.15fbb631fae3167c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-65955618-773a-4d5e-a2cd-6d7439665a03.15fbb6320b2b7649], Reason = [Created], Message = [Created container filler-pod-65955618-773a-4d5e-a2cd-6d7439665a03] STEP: Considering event: Type = [Normal], Name = [filler-pod-65955618-773a-4d5e-a2cd-6d7439665a03.15fbb63219260bb7], Reason = [Started], Message = [Started container filler-pod-65955618-773a-4d5e-a2cd-6d7439665a03] STEP: Considering event: Type = [Normal], Name = [filler-pod-72fa03cd-015a-4c65-ac64-89474712dce2.15fbb631cabc6c72], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4066/filler-pod-72fa03cd-015a-4c65-ac64-89474712dce2 to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-72fa03cd-015a-4c65-ac64-89474712dce2.15fbb631fba51ac1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-72fa03cd-015a-4c65-ac64-89474712dce2.15fbb6320b2a9bbc], Reason = [Created], Message = [Created container filler-pod-72fa03cd-015a-4c65-ac64-89474712dce2] STEP: Considering event: Type = [Normal], Name = [filler-pod-72fa03cd-015a-4c65-ac64-89474712dce2.15fbb63217011995], Reason = [Started], Message = [Started container filler-pod-72fa03cd-015a-4c65-ac64-89474712dce2] STEP: Considering event: Type = [Warning], Name = [additional-pod.15fbb632ba0b708c], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:40:24.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4066" for this suite. 
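Annotation: the predicates test works by arithmetic. It sums the per-node CPU requests it just logged, fills each node with a pause pod sized to the remaining allocatable CPU (the 11130m and 11060m figures), then creates one more pod and asserts it draws the "Insufficient cpu" FailedScheduling event. A sketch of the filler pod; note the conformance test pins pods with node labels and selectors, while this sketch uses NodeName for brevity.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createFillerPod pins a pause pod to one node with a CPU request sized to
// whatever is left of the node's allocatable CPU, so a subsequent pod with
// any CPU request cannot fit there.
func createFillerPod(ctx context.Context, clientset kubernetes.Interface, ns, nodeName, cpu string) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "filler-pod-"},
		Spec: corev1.PodSpec{
			NodeName: nodeName, // bypasses the scheduler for the filler itself
			Containers: []corev1.Container{{
				Name:  "filler",
				Image: "k8s.gcr.io/pause:3.2", // image from the events above
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse(cpu)},
					Limits:   corev1.ResourceList{corev1.ResourceCPU: resource.MustParse(cpu)},
				},
			}},
		},
	}
	_, err := clientset.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}

The remaining capacity per node would be computed from node.Status.Allocatable[corev1.ResourceCPU] minus the logged requests.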
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:5.330 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":275,"completed":244,"skipped":4149,"failed":0} [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:40:24.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 13 00:40:24.429: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Mar 13 00:40:26.465: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:40:27.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1033" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":275,"completed":245,"skipped":4149,"failed":0} SSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:40:27.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Mar 13 00:40:27.597: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 13 00:40:27.620: INFO: Waiting for terminating namespaces to be deleted... 
Mar 13 00:40:27.622: INFO: Logging pods the kubelet thinks is on node latest-worker before test Mar 13 00:40:27.627: INFO: condition-test-b9vxf from replication-controller-1033 started at 2020-03-13 00:40:25 +0000 UTC (1 container statuses recorded) Mar 13 00:40:27.627: INFO: Container httpd ready: true, restart count 0 Mar 13 00:40:27.627: INFO: kube-proxy-9jc24 from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container statuses recorded) Mar 13 00:40:27.627: INFO: Container kube-proxy ready: true, restart count 0 Mar 13 00:40:27.627: INFO: kindnet-2j5xm from kube-system started at 2020-03-08 14:49:42 +0000 UTC (1 container statuses recorded) Mar 13 00:40:27.627: INFO: Container kindnet-cni ready: true, restart count 0 Mar 13 00:40:27.627: INFO: filler-pod-65955618-773a-4d5e-a2cd-6d7439665a03 from sched-pred-4066 started at 2020-03-13 00:40:18 +0000 UTC (1 container statuses recorded) Mar 13 00:40:27.627: INFO: Container filler-pod-65955618-773a-4d5e-a2cd-6d7439665a03 ready: true, restart count 0 Mar 13 00:40:27.627: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Mar 13 00:40:27.632: INFO: filler-pod-72fa03cd-015a-4c65-ac64-89474712dce2 from sched-pred-4066 started at 2020-03-13 00:40:18 +0000 UTC (1 container statuses recorded) Mar 13 00:40:27.632: INFO: Container filler-pod-72fa03cd-015a-4c65-ac64-89474712dce2 ready: true, restart count 0 Mar 13 00:40:27.632: INFO: kindnet-spz5f from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container statuses recorded) Mar 13 00:40:27.632: INFO: Container kindnet-cni ready: true, restart count 0 Mar 13 00:40:27.632: INFO: coredns-6955765f44-cgshp from kube-system started at 2020-03-08 14:50:16 +0000 UTC (1 container statuses recorded) Mar 13 00:40:27.632: INFO: Container coredns ready: true, restart count 0 Mar 13 00:40:27.632: INFO: condition-test-rcrv9 from replication-controller-1033 started at 2020-03-13 00:40:25 +0000 UTC (1 container statuses recorded) Mar 13 00:40:27.632: INFO: Container httpd ready: true, restart count 0 Mar 13 00:40:27.632: INFO: kube-proxy-cx5xz from kube-system started at 2020-03-08 14:49:56 +0000 UTC (1 container statuses recorded) Mar 13 00:40:27.632: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15fbb6344a23cb05], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:40:30.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6805" for this suite. 
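Annotation: the non-matching NodeSelector case is the simplest of the predicates tests; a pod whose nodeSelector matches no node label produces exactly the "0/3 nodes are available: 3 node(s) didn't match node selector" event seen above. A sketch, with an illustrative label key and value:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createUnschedulablePod creates a pod whose nodeSelector cannot be
// satisfied, then the test waits for the FailedScheduling event.
func createUnschedulablePod(ctx context.Context, clientset kubernetes.Interface, ns string) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"no-such-label": "no-such-value"},
			Containers: []corev1.Container{{
				Name:  "restricted",
				Image: "k8s.gcr.io/pause:3.2",
			}},
		},
	}
	_, err := clientset.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}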
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":275,"completed":246,"skipped":4160,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:40:30.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 13 00:40:34.791: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 13 00:40:34.798: INFO: Pod pod-with-poststart-http-hook still exists Mar 13 00:40:36.798: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 13 00:40:36.801: INFO: Pod pod-with-poststart-http-hook still exists Mar 13 00:40:38.798: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 13 00:40:38.802: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:40:38.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-29" for this suite. 
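Annotation: the poststart test builds a pod whose container fires an HTTP GET at the handler pod created in BeforeEach ("create the container to handle the HTTPGet hook request") as soon as the container starts. A sketch of the hook wiring; the handler address, path, and port are assumptions, and on client-go before v0.23 the handler type is corev1.Handler rather than corev1.LifecycleHandler.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// podWithPostStartHTTPHook returns a pod whose container issues an HTTP GET
// against the handler pod right after the container process starts; the
// test then checks the handler actually received the request.
func podWithPostStartHTTPHook(handlerIP string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-poststart-http-hook",
				Image: "k8s.gcr.io/pause:3.2",
				Lifecycle: &corev1.Lifecycle{
					PostStart: &corev1.LifecycleHandler{
						HTTPGet: &corev1.HTTPGetAction{
							Host: handlerIP, // IP of the handler pod from BeforeEach
							Path: "/echo?msg=poststart",
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
}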
• [SLOW TEST:8.149 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":275,"completed":247,"skipped":4168,"failed":0} SSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:40:38.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:40:40.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9434" for this suite. 
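Annotation: the Docker Containers test is about what happens when a container spec leaves both command and args empty: the runtime falls back to the image's own ENTRYPOINT and CMD, and the test asserts on the behavior those defaults produce. A sketch of the spec shape; the image here is borrowed from elsewhere in this log and is illustrative.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithImageDefaults leaves Command and Args unset, so the container
// runtime uses the image's ENTRYPOINT and CMD unchanged.
func podWithImageDefaults() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "docker.io/library/httpd:2.4.38-alpine",
				// No Command, no Args: image defaults apply.
			}},
		},
	}
}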
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":275,"completed":248,"skipped":4172,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:40:40.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Mar 13 00:40:42.063: INFO: Pod name wrapped-volume-race-7e6f3656-cbc3-42c1-921d-0d8a016043c7: Found 0 pods out of 5 Mar 13 00:40:47.068: INFO: Pod name wrapped-volume-race-7e6f3656-cbc3-42c1-921d-0d8a016043c7: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-7e6f3656-cbc3-42c1-921d-0d8a016043c7 in namespace emptydir-wrapper-856, will wait for the garbage collector to delete the pods Mar 13 00:40:59.147: INFO: Deleting ReplicationController wrapped-volume-race-7e6f3656-cbc3-42c1-921d-0d8a016043c7 took: 6.145243ms Mar 13 00:40:59.447: INFO: Terminating ReplicationController wrapped-volume-race-7e6f3656-cbc3-42c1-921d-0d8a016043c7 pods took: 300.225781ms STEP: Creating RC which spawns configmap-volume pods Mar 13 00:41:04.983: INFO: Pod name wrapped-volume-race-05a6dc64-68c3-4c67-aa6b-5bf879cd8743: Found 0 pods out of 5 Mar 13 00:41:10.012: INFO: Pod name wrapped-volume-race-05a6dc64-68c3-4c67-aa6b-5bf879cd8743: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-05a6dc64-68c3-4c67-aa6b-5bf879cd8743 in namespace emptydir-wrapper-856, will wait for the garbage collector to delete the pods Mar 13 00:41:20.154: INFO: Deleting ReplicationController wrapped-volume-race-05a6dc64-68c3-4c67-aa6b-5bf879cd8743 took: 5.979139ms Mar 13 00:41:20.254: INFO: Terminating ReplicationController wrapped-volume-race-05a6dc64-68c3-4c67-aa6b-5bf879cd8743 pods took: 100.252194ms STEP: Creating RC which spawns configmap-volume pods Mar 13 00:41:26.795: INFO: Pod name wrapped-volume-race-881662f1-ead4-4b0c-bbf8-8f9243dc36f3: Found 0 pods out of 5 Mar 13 00:41:31.801: INFO: Pod name wrapped-volume-race-881662f1-ead4-4b0c-bbf8-8f9243dc36f3: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-881662f1-ead4-4b0c-bbf8-8f9243dc36f3 in namespace emptydir-wrapper-856, will wait for the garbage collector to delete the pods Mar 13 00:41:41.879: INFO: Deleting ReplicationController wrapped-volume-race-881662f1-ead4-4b0c-bbf8-8f9243dc36f3 took: 7.854217ms Mar 13 00:41:42.280: INFO: Terminating ReplicationController wrapped-volume-race-881662f1-ead4-4b0c-bbf8-8f9243dc36f3 pods took: 400.203199ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:41:53.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-856" for this suite. • [SLOW TEST:72.132 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":275,"completed":249,"skipped":4179,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:41:53.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 13 00:41:53.131: INFO: Waiting up to 5m0s for pod "downwardapi-volume-99b5559a-b31c-482c-a1c8-6e0cb4e19b6a" in namespace "downward-api-2550" to be "Succeeded or Failed" Mar 13 00:41:53.141: INFO: Pod "downwardapi-volume-99b5559a-b31c-482c-a1c8-6e0cb4e19b6a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.219002ms Mar 13 00:41:55.145: INFO: Pod "downwardapi-volume-99b5559a-b31c-482c-a1c8-6e0cb4e19b6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014261866s Mar 13 00:41:57.149: INFO: Pod "downwardapi-volume-99b5559a-b31c-482c-a1c8-6e0cb4e19b6a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018122967s STEP: Saw pod success Mar 13 00:41:57.149: INFO: Pod "downwardapi-volume-99b5559a-b31c-482c-a1c8-6e0cb4e19b6a" satisfied condition "Succeeded or Failed" Mar 13 00:41:57.151: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-99b5559a-b31c-482c-a1c8-6e0cb4e19b6a container client-container: STEP: delete the pod Mar 13 00:41:57.173: INFO: Waiting for pod downwardapi-volume-99b5559a-b31c-482c-a1c8-6e0cb4e19b6a to disappear Mar 13 00:41:57.177: INFO: Pod downwardapi-volume-99b5559a-b31c-482c-a1c8-6e0cb4e19b6a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:41:57.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2550" for this suite. 
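Annotation: the downward API volume tests all follow one pattern: expose a field of the pod's own spec as a file, have the container cat that file, and compare the container logs against the expected value. A sketch for the memory-request case; the image, paths, and quantity are illustrative.

package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIMemoryRequestPod mounts a downwardAPI volume exposing the
// container's own memory request as a file. With the default divisor of
// "1", the file contains the plain byte count (e.g. "33554432" for 32Mi).
func downwardAPIMemoryRequestPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.memory",
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("32Mi")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
}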
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":250,"skipped":4191,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:41:57.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 13 00:41:57.269: INFO: Waiting up to 5m0s for pod "downwardapi-volume-82d0ea14-71a7-4072-9a32-d71ea0d99793" in namespace "projected-6415" to be "Succeeded or Failed" Mar 13 00:41:57.288: INFO: Pod "downwardapi-volume-82d0ea14-71a7-4072-9a32-d71ea0d99793": Phase="Pending", Reason="", readiness=false. Elapsed: 19.851359ms Mar 13 00:41:59.295: INFO: Pod "downwardapi-volume-82d0ea14-71a7-4072-9a32-d71ea0d99793": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.026900627s STEP: Saw pod success Mar 13 00:41:59.296: INFO: Pod "downwardapi-volume-82d0ea14-71a7-4072-9a32-d71ea0d99793" satisfied condition "Succeeded or Failed" Mar 13 00:41:59.301: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-82d0ea14-71a7-4072-9a32-d71ea0d99793 container client-container: STEP: delete the pod Mar 13 00:41:59.348: INFO: Waiting for pod downwardapi-volume-82d0ea14-71a7-4072-9a32-d71ea0d99793 to disappear Mar 13 00:41:59.350: INFO: Pod downwardapi-volume-82d0ea14-71a7-4072-9a32-d71ea0d99793 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:41:59.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6415" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":251,"skipped":4199,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:41:59.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-2789a9e3-6546-4e8c-8d47-22f16742524e STEP: Creating a pod to test consume configMaps Mar 13 00:41:59.470: INFO: Waiting up to 5m0s for pod "pod-configmaps-52eca635-f072-42c7-ad58-1fb5f6d851a5" in namespace "configmap-4569" to be "Succeeded or Failed" Mar 13 00:41:59.488: INFO: Pod "pod-configmaps-52eca635-f072-42c7-ad58-1fb5f6d851a5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.107954ms Mar 13 00:42:01.491: INFO: Pod "pod-configmaps-52eca635-f072-42c7-ad58-1fb5f6d851a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.021111537s STEP: Saw pod success Mar 13 00:42:01.491: INFO: Pod "pod-configmaps-52eca635-f072-42c7-ad58-1fb5f6d851a5" satisfied condition "Succeeded or Failed" Mar 13 00:42:01.493: INFO: Trying to get logs from node latest-worker pod pod-configmaps-52eca635-f072-42c7-ad58-1fb5f6d851a5 container configmap-volume-test: STEP: delete the pod Mar 13 00:42:01.514: INFO: Waiting for pod pod-configmaps-52eca635-f072-42c7-ad58-1fb5f6d851a5 to disappear Mar 13 00:42:01.519: INFO: Pod pod-configmaps-52eca635-f072-42c7-ad58-1fb5f6d851a5 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:42:01.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4569" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":252,"skipped":4239,"failed":0} ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:42:01.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 13 00:42:01.593: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dbed7a77-bf2f-46c3-9e5e-28f1061d95bb" in namespace "downward-api-33" to be "Succeeded or Failed" Mar 13 00:42:01.644: INFO: Pod "downwardapi-volume-dbed7a77-bf2f-46c3-9e5e-28f1061d95bb": Phase="Pending", Reason="", readiness=false. Elapsed: 51.601097ms Mar 13 00:42:03.648: INFO: Pod "downwardapi-volume-dbed7a77-bf2f-46c3-9e5e-28f1061d95bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.055402652s STEP: Saw pod success Mar 13 00:42:03.648: INFO: Pod "downwardapi-volume-dbed7a77-bf2f-46c3-9e5e-28f1061d95bb" satisfied condition "Succeeded or Failed" Mar 13 00:42:03.651: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-dbed7a77-bf2f-46c3-9e5e-28f1061d95bb container client-container: STEP: delete the pod Mar 13 00:42:03.670: INFO: Waiting for pod downwardapi-volume-dbed7a77-bf2f-46c3-9e5e-28f1061d95bb to disappear Mar 13 00:42:03.675: INFO: Pod downwardapi-volume-dbed7a77-bf2f-46c3-9e5e-28f1061d95bb no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:42:03.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-33" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":253,"skipped":4239,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:42:03.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-upd-1c6f9c39-e520-4437-8e04-ec2f83ac9e8d STEP: Creating the pod STEP: Updating configmap configmap-test-upd-1c6f9c39-e520-4437-8e04-ec2f83ac9e8d STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:42:07.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2577" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":254,"skipped":4248,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:42:07.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:42:08.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4095" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":275,"completed":255,"skipped":4249,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:42:08.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 13 00:42:08.095: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7f21517f-13be-4c27-b9c0-e1351beb5ccd" in namespace "downward-api-653" to be "Succeeded or Failed" Mar 13 00:42:08.099: INFO: Pod "downwardapi-volume-7f21517f-13be-4c27-b9c0-e1351beb5ccd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016201ms Mar 13 00:42:10.103: INFO: Pod "downwardapi-volume-7f21517f-13be-4c27-b9c0-e1351beb5ccd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007869701s STEP: Saw pod success Mar 13 00:42:10.103: INFO: Pod "downwardapi-volume-7f21517f-13be-4c27-b9c0-e1351beb5ccd" satisfied condition "Succeeded or Failed" Mar 13 00:42:10.107: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-7f21517f-13be-4c27-b9c0-e1351beb5ccd container client-container: STEP: delete the pod Mar 13 00:42:10.167: INFO: Waiting for pod downwardapi-volume-7f21517f-13be-4c27-b9c0-e1351beb5ccd to disappear Mar 13 00:42:10.177: INFO: Pod downwardapi-volume-7f21517f-13be-4c27-b9c0-e1351beb5ccd no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:42:10.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-653" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":256,"skipped":4274,"failed":0} SSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:42:10.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 13 00:42:10.251: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-ef0b8c4e-73af-457d-8037-820f6a719268" in namespace "security-context-test-1656" to be "Succeeded or Failed" Mar 13 00:42:10.279: INFO: Pod "busybox-readonly-false-ef0b8c4e-73af-457d-8037-820f6a719268": Phase="Pending", Reason="", readiness=false. Elapsed: 28.248235ms Mar 13 00:42:12.291: INFO: Pod "busybox-readonly-false-ef0b8c4e-73af-457d-8037-820f6a719268": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.040373077s Mar 13 00:42:12.291: INFO: Pod "busybox-readonly-false-ef0b8c4e-73af-457d-8037-820f6a719268" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:42:12.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1656" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":257,"skipped":4281,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:42:12.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir volume type on node default medium Mar 13 00:42:12.408: INFO: Waiting up to 5m0s for pod "pod-87d897a1-4a68-409b-807a-249822d40b31" in namespace "emptydir-5963" to be "Succeeded or Failed" Mar 13 00:42:12.440: INFO: Pod "pod-87d897a1-4a68-409b-807a-249822d40b31": Phase="Pending", Reason="", readiness=false. Elapsed: 32.044666ms Mar 13 00:42:14.448: INFO: Pod "pod-87d897a1-4a68-409b-807a-249822d40b31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.040013182s STEP: Saw pod success Mar 13 00:42:14.448: INFO: Pod "pod-87d897a1-4a68-409b-807a-249822d40b31" satisfied condition "Succeeded or Failed" Mar 13 00:42:14.450: INFO: Trying to get logs from node latest-worker pod pod-87d897a1-4a68-409b-807a-249822d40b31 container test-container: STEP: delete the pod Mar 13 00:42:14.497: INFO: Waiting for pod pod-87d897a1-4a68-409b-807a-249822d40b31 to disappear Mar 13 00:42:14.523: INFO: Pod pod-87d897a1-4a68-409b-807a-249822d40b31 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:42:14.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5963" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":258,"skipped":4300,"failed":0} SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:42:14.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-projected-5r5w STEP: Creating a pod to test atomic-volume-subpath Mar 13 00:42:14.581: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-5r5w" in namespace "subpath-5393" to be "Succeeded or Failed" Mar 13 00:42:14.586: INFO: Pod "pod-subpath-test-projected-5r5w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.923606ms Mar 13 00:42:16.596: INFO: Pod "pod-subpath-test-projected-5r5w": Phase="Running", Reason="", readiness=true. Elapsed: 2.015586783s Mar 13 00:42:18.600: INFO: Pod "pod-subpath-test-projected-5r5w": Phase="Running", Reason="", readiness=true. Elapsed: 4.019103119s Mar 13 00:42:20.604: INFO: Pod "pod-subpath-test-projected-5r5w": Phase="Running", Reason="", readiness=true. Elapsed: 6.022858879s Mar 13 00:42:22.608: INFO: Pod "pod-subpath-test-projected-5r5w": Phase="Running", Reason="", readiness=true. Elapsed: 8.026794615s Mar 13 00:42:24.611: INFO: Pod "pod-subpath-test-projected-5r5w": Phase="Running", Reason="", readiness=true. Elapsed: 10.030234694s Mar 13 00:42:26.615: INFO: Pod "pod-subpath-test-projected-5r5w": Phase="Running", Reason="", readiness=true. Elapsed: 12.034173663s Mar 13 00:42:28.619: INFO: Pod "pod-subpath-test-projected-5r5w": Phase="Running", Reason="", readiness=true. Elapsed: 14.037892443s Mar 13 00:42:30.623: INFO: Pod "pod-subpath-test-projected-5r5w": Phase="Running", Reason="", readiness=true. Elapsed: 16.041943632s Mar 13 00:42:32.627: INFO: Pod "pod-subpath-test-projected-5r5w": Phase="Running", Reason="", readiness=true. Elapsed: 18.045669388s Mar 13 00:42:34.632: INFO: Pod "pod-subpath-test-projected-5r5w": Phase="Running", Reason="", readiness=true. Elapsed: 20.051333516s Mar 13 00:42:36.636: INFO: Pod "pod-subpath-test-projected-5r5w": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 22.054938735s STEP: Saw pod success Mar 13 00:42:36.636: INFO: Pod "pod-subpath-test-projected-5r5w" satisfied condition "Succeeded or Failed" Mar 13 00:42:36.638: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-projected-5r5w container test-container-subpath-projected-5r5w: STEP: delete the pod Mar 13 00:42:36.657: INFO: Waiting for pod pod-subpath-test-projected-5r5w to disappear Mar 13 00:42:36.662: INFO: Pod pod-subpath-test-projected-5r5w no longer exists STEP: Deleting pod pod-subpath-test-projected-5r5w Mar 13 00:42:36.662: INFO: Deleting pod "pod-subpath-test-projected-5r5w" in namespace "subpath-5393" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:42:36.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5393" for this suite. • [SLOW TEST:22.143 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":275,"completed":259,"skipped":4304,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:42:36.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 13 00:42:40.822: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 13 00:42:40.832: INFO: Pod pod-with-prestop-exec-hook still exists Mar 13 00:42:42.832: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 13 00:42:42.835: INFO: Pod pod-with-prestop-exec-hook still exists Mar 13 00:42:44.832: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 13 00:42:44.836: INFO: Pod pod-with-prestop-exec-hook still exists Mar 13 00:42:46.832: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 13 00:42:46.837: INFO: Pod pod-with-prestop-exec-hook still exists Mar 13 00:42:48.832: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 13 00:42:48.843: INFO: Pod pod-with-prestop-exec-hook still exists Mar 13 00:42:50.832: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 13 00:42:50.836: INFO: Pod pod-with-prestop-exec-hook still exists Mar 13 00:42:52.832: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 13 00:42:52.836: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:42:52.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8090" for this suite. • [SLOW TEST:16.178 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":275,"completed":260,"skipped":4319,"failed":0} [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:42:52.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Mar 13 00:42:52.943: INFO: Waiting up to 5m0s for pod "downward-api-e57f675e-8c3c-4765-98b9-a70df43e97d4" in namespace "downward-api-3091" to be "Succeeded or Failed" Mar 13 00:42:52.965: INFO: Pod "downward-api-e57f675e-8c3c-4765-98b9-a70df43e97d4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 21.700126ms Mar 13 00:42:54.968: INFO: Pod "downward-api-e57f675e-8c3c-4765-98b9-a70df43e97d4": Phase="Running", Reason="", readiness=true. Elapsed: 2.025456409s Mar 13 00:42:56.972: INFO: Pod "downward-api-e57f675e-8c3c-4765-98b9-a70df43e97d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02907332s STEP: Saw pod success Mar 13 00:42:56.972: INFO: Pod "downward-api-e57f675e-8c3c-4765-98b9-a70df43e97d4" satisfied condition "Succeeded or Failed" Mar 13 00:42:56.998: INFO: Trying to get logs from node latest-worker pod downward-api-e57f675e-8c3c-4765-98b9-a70df43e97d4 container dapi-container: STEP: delete the pod Mar 13 00:42:57.041: INFO: Waiting for pod downward-api-e57f675e-8c3c-4765-98b9-a70df43e97d4 to disappear Mar 13 00:42:57.052: INFO: Pod downward-api-e57f675e-8c3c-4765-98b9-a70df43e97d4 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:42:57.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3091" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":275,"completed":261,"skipped":4319,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:42:57.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-96f5565d-9d59-430c-83f3-7cb183b702ed STEP: Creating a pod to test consume secrets Mar 13 00:42:57.128: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-00701085-a29b-4368-94aa-18b5fa35ca80" in namespace "projected-4456" to be "Succeeded or Failed" Mar 13 00:42:57.130: INFO: Pod "pod-projected-secrets-00701085-a29b-4368-94aa-18b5fa35ca80": Phase="Pending", Reason="", readiness=false. Elapsed: 2.505105ms Mar 13 00:42:59.134: INFO: Pod "pod-projected-secrets-00701085-a29b-4368-94aa-18b5fa35ca80": Phase="Running", Reason="", readiness=true. Elapsed: 2.006477211s Mar 13 00:43:01.139: INFO: Pod "pod-projected-secrets-00701085-a29b-4368-94aa-18b5fa35ca80": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010760577s STEP: Saw pod success Mar 13 00:43:01.139: INFO: Pod "pod-projected-secrets-00701085-a29b-4368-94aa-18b5fa35ca80" satisfied condition "Succeeded or Failed" Mar 13 00:43:01.141: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-00701085-a29b-4368-94aa-18b5fa35ca80 container projected-secret-volume-test: STEP: delete the pod Mar 13 00:43:01.161: INFO: Waiting for pod pod-projected-secrets-00701085-a29b-4368-94aa-18b5fa35ca80 to disappear Mar 13 00:43:01.166: INFO: Pod pod-projected-secrets-00701085-a29b-4368-94aa-18b5fa35ca80 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:43:01.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4456" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":262,"skipped":4361,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:43:01.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 13 00:43:01.252: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:43:02.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1333" for this suite. 
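The creating/deleting exercise above needs nothing more than a minimal CustomResourceDefinition; a sketch of one accepted by apiextensions.k8s.io/v1 (which the v1.17 apiserver under test serves) is below. The group, kind, and schema are illustrative placeholders, not the randomized names the test generates.

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com        # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
    listKind: FooList
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true

Deleting the CRD removes the served endpoints and, with them, any remaining custom objects of that kind.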
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":275,"completed":263,"skipped":4365,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:43:02.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 13 00:43:02.772: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 13 00:43:04.782: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719656982, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719656982, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719656982, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719656982, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 13 00:43:07.797: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:43:07.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3875" for this suite. 
STEP: Destroying namespace "webhook-3875-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.689 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":264,"skipped":4379,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:43:07.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:43:12.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1458" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":275,"completed":265,"skipped":4413,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:43:12.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:43:14.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9237" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":266,"skipped":4428,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:43:14.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-1877 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 13 00:43:14.329: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 13 00:43:14.363: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 13 00:43:16.366: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 13 00:43:18.366: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 13 00:43:20.366: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 13 00:43:22.367: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 13 00:43:24.367: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 13 00:43:26.366: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 13 00:43:28.366: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 13 00:43:30.366: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 
13 00:43:30.369: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 13 00:43:34.397: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.110:8080/dial?request=hostname&protocol=http&host=10.244.1.109&port=8080&tries=1'] Namespace:pod-network-test-1877 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 13 00:43:34.397: INFO: >>> kubeConfig: /root/.kube/config I0313 00:43:34.424187 7 log.go:172] (0xc002c4e8f0) (0xc00263d0e0) Create stream I0313 00:43:34.424234 7 log.go:172] (0xc002c4e8f0) (0xc00263d0e0) Stream added, broadcasting: 1 I0313 00:43:34.426162 7 log.go:172] (0xc002c4e8f0) Reply frame received for 1 I0313 00:43:34.426185 7 log.go:172] (0xc002c4e8f0) (0xc00263d220) Create stream I0313 00:43:34.426192 7 log.go:172] (0xc002c4e8f0) (0xc00263d220) Stream added, broadcasting: 3 I0313 00:43:34.426834 7 log.go:172] (0xc002c4e8f0) Reply frame received for 3 I0313 00:43:34.426857 7 log.go:172] (0xc002c4e8f0) (0xc0024cd040) Create stream I0313 00:43:34.426868 7 log.go:172] (0xc002c4e8f0) (0xc0024cd040) Stream added, broadcasting: 5 I0313 00:43:34.427578 7 log.go:172] (0xc002c4e8f0) Reply frame received for 5 I0313 00:43:34.513572 7 log.go:172] (0xc002c4e8f0) Data frame received for 3 I0313 00:43:34.513594 7 log.go:172] (0xc00263d220) (3) Data frame handling I0313 00:43:34.513608 7 log.go:172] (0xc00263d220) (3) Data frame sent I0313 00:43:34.514036 7 log.go:172] (0xc002c4e8f0) Data frame received for 5 I0313 00:43:34.514053 7 log.go:172] (0xc0024cd040) (5) Data frame handling I0313 00:43:34.514354 7 log.go:172] (0xc002c4e8f0) Data frame received for 3 I0313 00:43:34.514365 7 log.go:172] (0xc00263d220) (3) Data frame handling I0313 00:43:34.515274 7 log.go:172] (0xc002c4e8f0) Data frame received for 1 I0313 00:43:34.515284 7 log.go:172] (0xc00263d0e0) (1) Data frame handling I0313 00:43:34.515289 7 log.go:172] (0xc00263d0e0) (1) Data frame sent I0313 00:43:34.515297 7 log.go:172] (0xc002c4e8f0) (0xc00263d0e0) Stream removed, broadcasting: 1 I0313 00:43:34.515304 7 log.go:172] (0xc002c4e8f0) Go away received I0313 00:43:34.515394 7 log.go:172] (0xc002c4e8f0) (0xc00263d0e0) Stream removed, broadcasting: 1 I0313 00:43:34.515405 7 log.go:172] (0xc002c4e8f0) (0xc00263d220) Stream removed, broadcasting: 3 I0313 00:43:34.515413 7 log.go:172] (0xc002c4e8f0) (0xc0024cd040) Stream removed, broadcasting: 5 Mar 13 00:43:34.515: INFO: Waiting for responses: map[] Mar 13 00:43:34.517: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.110:8080/dial?request=hostname&protocol=http&host=10.244.2.70&port=8080&tries=1'] Namespace:pod-network-test-1877 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 13 00:43:34.517: INFO: >>> kubeConfig: /root/.kube/config I0313 00:43:34.539922 7 log.go:172] (0xc000dd5080) (0xc0024cd540) Create stream I0313 00:43:34.539945 7 log.go:172] (0xc000dd5080) (0xc0024cd540) Stream added, broadcasting: 1 I0313 00:43:34.541518 7 log.go:172] (0xc000dd5080) Reply frame received for 1 I0313 00:43:34.541556 7 log.go:172] (0xc000dd5080) (0xc0024cd5e0) Create stream I0313 00:43:34.541567 7 log.go:172] (0xc000dd5080) (0xc0024cd5e0) Stream added, broadcasting: 3 I0313 00:43:34.542268 7 log.go:172] (0xc000dd5080) Reply frame received for 3 I0313 00:43:34.542287 7 log.go:172] (0xc000dd5080) (0xc0024cd720) Create stream I0313 00:43:34.542297 7 log.go:172] 
(0xc000dd5080) (0xc0024cd720) Stream added, broadcasting: 5 I0313 00:43:34.543203 7 log.go:172] (0xc000dd5080) Reply frame received for 5 I0313 00:43:34.614896 7 log.go:172] (0xc000dd5080) Data frame received for 3 I0313 00:43:34.614912 7 log.go:172] (0xc0024cd5e0) (3) Data frame handling I0313 00:43:34.614920 7 log.go:172] (0xc0024cd5e0) (3) Data frame sent I0313 00:43:34.615243 7 log.go:172] (0xc000dd5080) Data frame received for 5 I0313 00:43:34.615253 7 log.go:172] (0xc0024cd720) (5) Data frame handling I0313 00:43:34.615514 7 log.go:172] (0xc000dd5080) Data frame received for 3 I0313 00:43:34.615562 7 log.go:172] (0xc0024cd5e0) (3) Data frame handling I0313 00:43:34.616530 7 log.go:172] (0xc000dd5080) Data frame received for 1 I0313 00:43:34.616548 7 log.go:172] (0xc0024cd540) (1) Data frame handling I0313 00:43:34.616564 7 log.go:172] (0xc0024cd540) (1) Data frame sent I0313 00:43:34.616588 7 log.go:172] (0xc000dd5080) (0xc0024cd540) Stream removed, broadcasting: 1 I0313 00:43:34.616605 7 log.go:172] (0xc000dd5080) Go away received I0313 00:43:34.616705 7 log.go:172] (0xc000dd5080) (0xc0024cd540) Stream removed, broadcasting: 1 I0313 00:43:34.616717 7 log.go:172] (0xc000dd5080) (0xc0024cd5e0) Stream removed, broadcasting: 3 I0313 00:43:34.616728 7 log.go:172] (0xc000dd5080) (0xc0024cd720) Stream removed, broadcasting: 5 Mar 13 00:43:34.616: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:43:34.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1877" for this suite. • [SLOW TEST:20.403 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":275,"completed":267,"skipped":4446,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:43:34.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 13 00:43:34.675: INFO: Waiting up to 5m0s for pod "pod-9a683508-8fb2-44e7-97ef-8765a54a1f31" in namespace "emptydir-3386" to be "Succeeded or Failed" Mar 13 00:43:34.678: INFO: Pod "pod-9a683508-8fb2-44e7-97ef-8765a54a1f31": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.270396ms Mar 13 00:43:36.681: INFO: Pod "pod-9a683508-8fb2-44e7-97ef-8765a54a1f31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006599123s STEP: Saw pod success Mar 13 00:43:36.681: INFO: Pod "pod-9a683508-8fb2-44e7-97ef-8765a54a1f31" satisfied condition "Succeeded or Failed" Mar 13 00:43:36.684: INFO: Trying to get logs from node latest-worker pod pod-9a683508-8fb2-44e7-97ef-8765a54a1f31 container test-container: STEP: delete the pod Mar 13 00:43:36.701: INFO: Waiting for pod pod-9a683508-8fb2-44e7-97ef-8765a54a1f31 to disappear Mar 13 00:43:36.751: INFO: Pod pod-9a683508-8fb2-44e7-97ef-8765a54a1f31 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:43:36.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3386" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":268,"skipped":4460,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:43:36.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-ef9d0bb3-df7b-4e54-a6f1-96429c79266b STEP: Creating a pod to test consume secrets Mar 13 00:43:36.857: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b07f667b-f336-4ef5-904c-843a5bb046e2" in namespace "projected-351" to be "Succeeded or Failed" Mar 13 00:43:36.874: INFO: Pod "pod-projected-secrets-b07f667b-f336-4ef5-904c-843a5bb046e2": Phase="Pending", Reason="", readiness=false. Elapsed: 17.04636ms Mar 13 00:43:38.878: INFO: Pod "pod-projected-secrets-b07f667b-f336-4ef5-904c-843a5bb046e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.020624075s STEP: Saw pod success Mar 13 00:43:38.878: INFO: Pod "pod-projected-secrets-b07f667b-f336-4ef5-904c-843a5bb046e2" satisfied condition "Succeeded or Failed" Mar 13 00:43:38.880: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-b07f667b-f336-4ef5-904c-843a5bb046e2 container projected-secret-volume-test: STEP: delete the pod Mar 13 00:43:38.923: INFO: Waiting for pod pod-projected-secrets-b07f667b-f336-4ef5-904c-843a5bb046e2 to disappear Mar 13 00:43:38.952: INFO: Pod pod-projected-secrets-b07f667b-f336-4ef5-904c-843a5bb046e2 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:43:38.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-351" for this suite. 
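For the "volume with mappings" case above, the items list on the projected secret source is what remaps a secret key to a custom file name under the mount. A self-contained sketch; the secret name, key, and paths are illustrative (the test generates unique names), and only the container name mirrors the log:

apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-test-map   # hypothetical; the test uses a generated name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map
          items:
          - key: data-1              # key in the Secret
            path: new-path-data-1    # file name created under the mount point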
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":269,"skipped":4479,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:43:38.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir volume type on tmpfs Mar 13 00:43:39.007: INFO: Waiting up to 5m0s for pod "pod-9e282baf-f107-4d7c-9d82-09e85b195ecc" in namespace "emptydir-8312" to be "Succeeded or Failed" Mar 13 00:43:39.011: INFO: Pod "pod-9e282baf-f107-4d7c-9d82-09e85b195ecc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.935133ms Mar 13 00:43:41.015: INFO: Pod "pod-9e282baf-f107-4d7c-9d82-09e85b195ecc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008036852s STEP: Saw pod success Mar 13 00:43:41.016: INFO: Pod "pod-9e282baf-f107-4d7c-9d82-09e85b195ecc" satisfied condition "Succeeded or Failed" Mar 13 00:43:41.018: INFO: Trying to get logs from node latest-worker2 pod pod-9e282baf-f107-4d7c-9d82-09e85b195ecc container test-container: STEP: delete the pod Mar 13 00:43:41.040: INFO: Waiting for pod pod-9e282baf-f107-4d7c-9d82-09e85b195ecc to disappear Mar 13 00:43:41.055: INFO: Pod pod-9e282baf-f107-4d7c-9d82-09e85b195ecc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:43:41.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8312" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":270,"skipped":4601,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:43:41.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 13 00:43:41.147: INFO: Creating ReplicaSet my-hostname-basic-77fdfe13-c857-4eae-aeec-c563543cc749 Mar 13 00:43:41.173: INFO: Pod name my-hostname-basic-77fdfe13-c857-4eae-aeec-c563543cc749: Found 0 pods out of 1 Mar 13 00:43:46.197: INFO: Pod name my-hostname-basic-77fdfe13-c857-4eae-aeec-c563543cc749: Found 1 pods out of 1 Mar 13 00:43:46.197: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-77fdfe13-c857-4eae-aeec-c563543cc749" is running Mar 13 00:43:46.199: INFO: Pod "my-hostname-basic-77fdfe13-c857-4eae-aeec-c563543cc749-vgqfj" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-13 00:43:41 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-13 00:43:42 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-13 00:43:42 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-13 00:43:41 +0000 UTC Reason: Message:}]) Mar 13 00:43:46.200: INFO: Trying to dial the pod Mar 13 00:43:51.211: INFO: Controller my-hostname-basic-77fdfe13-c857-4eae-aeec-c563543cc749: Got expected result from replica 1 [my-hostname-basic-77fdfe13-c857-4eae-aeec-c563543cc749-vgqfj]: "my-hostname-basic-77fdfe13-c857-4eae-aeec-c563543cc749-vgqfj", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:43:51.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9223" for this suite. • [SLOW TEST:10.155 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":271,"skipped":4625,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:43:51.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:44:07.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-190" for this suite. • [SLOW TEST:16.155 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":275,"completed":272,"skipped":4635,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:44:07.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Mar 13 00:44:11.491: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-8417 PodName:pod-sharedvolume-f7547fa9-d79e-49cb-a8ff-37a3ece38921 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 13 00:44:11.491: INFO: >>> kubeConfig: /root/.kube/config I0313 00:44:11.518813 7 log.go:172] (0xc00271def0) (0xc00245b9a0) Create stream I0313 00:44:11.518857 7 log.go:172] (0xc00271def0) (0xc00245b9a0) Stream added, broadcasting: 1 I0313 00:44:11.520777 7 log.go:172] (0xc00271def0) Reply frame received for 1 I0313 00:44:11.520817 7 log.go:172] (0xc00271def0) (0xc00095bf40) Create stream I0313 00:44:11.520831 7 log.go:172] (0xc00271def0) (0xc00095bf40) Stream added, broadcasting: 3 I0313 00:44:11.521658 
7 log.go:172] (0xc00271def0) Reply frame received for 3 I0313 00:44:11.521703 7 log.go:172] (0xc00271def0) (0xc00245ba40) Create stream I0313 00:44:11.521716 7 log.go:172] (0xc00271def0) (0xc00245ba40) Stream added, broadcasting: 5 I0313 00:44:11.523624 7 log.go:172] (0xc00271def0) Reply frame received for 5 I0313 00:44:11.593585 7 log.go:172] (0xc00271def0) Data frame received for 3 I0313 00:44:11.593617 7 log.go:172] (0xc00095bf40) (3) Data frame handling I0313 00:44:11.593628 7 log.go:172] (0xc00095bf40) (3) Data frame sent I0313 00:44:11.593635 7 log.go:172] (0xc00271def0) Data frame received for 3 I0313 00:44:11.593646 7 log.go:172] (0xc00095bf40) (3) Data frame handling I0313 00:44:11.593695 7 log.go:172] (0xc00271def0) Data frame received for 5 I0313 00:44:11.593717 7 log.go:172] (0xc00245ba40) (5) Data frame handling I0313 00:44:11.595114 7 log.go:172] (0xc00271def0) Data frame received for 1 I0313 00:44:11.595130 7 log.go:172] (0xc00245b9a0) (1) Data frame handling I0313 00:44:11.595144 7 log.go:172] (0xc00245b9a0) (1) Data frame sent I0313 00:44:11.595155 7 log.go:172] (0xc00271def0) (0xc00245b9a0) Stream removed, broadcasting: 1 I0313 00:44:11.595235 7 log.go:172] (0xc00271def0) (0xc00245b9a0) Stream removed, broadcasting: 1 I0313 00:44:11.595250 7 log.go:172] (0xc00271def0) (0xc00095bf40) Stream removed, broadcasting: 3 I0313 00:44:11.595422 7 log.go:172] (0xc00271def0) (0xc00245ba40) Stream removed, broadcasting: 5 I0313 00:44:11.595490 7 log.go:172] (0xc00271def0) Go away received Mar 13 00:44:11.595: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:44:11.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8417" for this suite. 
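The shared-volume pod above works because a single emptyDir can be mounted into several containers of the same pod; whatever the writer creates under the mount is immediately visible to the reader, which is what the cat /usr/share/volumeshare/shareddata.txt exec verifies. A sketch of the shape; the busybox-main-container name and file path come from the log, while the writer side (an nginx container per the STEP text) is approximated with busybox here:

apiVersion: v1
kind: Pod
metadata:
  name: pod-sharedvolume
spec:
  containers:
  - name: writer-container            # hypothetical name for the writing side
    image: busybox:1.29
    command: ["sh", "-c", "echo 'hello from the shared volume' > /usr/share/volumeshare/shareddata.txt; sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
  - name: busybox-main-container      # the container the log execs into
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
  volumes:
  - name: shared-data
    emptyDir: {}   # the same backing directory is mounted into both containers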
•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":275,"completed":273,"skipped":4667,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:44:11.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-4557 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 13 00:44:11.693: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 13 00:44:11.726: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 13 00:44:13.729: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 13 00:44:15.730: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 13 00:44:17.729: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 13 00:44:19.731: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 13 00:44:21.729: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 13 00:44:23.730: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 13 00:44:25.730: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 13 00:44:27.730: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 13 00:44:27.735: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 13 00:44:29.768: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.114:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4557 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 13 00:44:29.768: INFO: >>> kubeConfig: /root/.kube/config I0313 00:44:29.803009 7 log.go:172] (0xc002c4e420) (0xc0013bb5e0) Create stream I0313 00:44:29.803068 7 log.go:172] (0xc002c4e420) (0xc0013bb5e0) Stream added, broadcasting: 1 I0313 00:44:29.806046 7 log.go:172] (0xc002c4e420) Reply frame received for 1 I0313 00:44:29.806092 7 log.go:172] (0xc002c4e420) (0xc002079540) Create stream I0313 00:44:29.806106 7 log.go:172] (0xc002c4e420) (0xc002079540) Stream added, broadcasting: 3 I0313 00:44:29.807356 7 log.go:172] (0xc002c4e420) Reply frame received for 3 I0313 00:44:29.807416 7 log.go:172] (0xc002c4e420) (0xc000d16000) Create stream I0313 00:44:29.807436 7 log.go:172] (0xc002c4e420) (0xc000d16000) Stream added, broadcasting: 5 I0313 00:44:29.808818 7 log.go:172] (0xc002c4e420) Reply frame received for 5 I0313 00:44:29.898329 7 log.go:172] (0xc002c4e420) Data frame received for 5 I0313 00:44:29.898376 7 log.go:172] (0xc000d16000) (5) Data frame handling I0313 00:44:29.898412 7 log.go:172] (0xc002c4e420) Data frame received for 3 I0313 00:44:29.898467 7 
log.go:172] (0xc002079540) (3) Data frame handling I0313 00:44:29.898502 7 log.go:172] (0xc002079540) (3) Data frame sent I0313 00:44:29.898523 7 log.go:172] (0xc002c4e420) Data frame received for 3 I0313 00:44:29.898540 7 log.go:172] (0xc002079540) (3) Data frame handling I0313 00:44:29.899998 7 log.go:172] (0xc002c4e420) Data frame received for 1 I0313 00:44:29.900031 7 log.go:172] (0xc0013bb5e0) (1) Data frame handling I0313 00:44:29.900041 7 log.go:172] (0xc0013bb5e0) (1) Data frame sent I0313 00:44:29.900051 7 log.go:172] (0xc002c4e420) (0xc0013bb5e0) Stream removed, broadcasting: 1 I0313 00:44:29.900068 7 log.go:172] (0xc002c4e420) Go away received I0313 00:44:29.900271 7 log.go:172] (0xc002c4e420) (0xc0013bb5e0) Stream removed, broadcasting: 1 I0313 00:44:29.900301 7 log.go:172] (0xc002c4e420) (0xc002079540) Stream removed, broadcasting: 3 I0313 00:44:29.900321 7 log.go:172] (0xc002c4e420) (0xc000d16000) Stream removed, broadcasting: 5 Mar 13 00:44:29.900: INFO: Found all expected endpoints: [netserver-0] Mar 13 00:44:29.903: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.73:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4557 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 13 00:44:29.903: INFO: >>> kubeConfig: /root/.kube/config I0313 00:44:29.931539 7 log.go:172] (0xc002dc4210) (0xc002079d60) Create stream I0313 00:44:29.931565 7 log.go:172] (0xc002dc4210) (0xc002079d60) Stream added, broadcasting: 1 I0313 00:44:29.933653 7 log.go:172] (0xc002dc4210) Reply frame received for 1 I0313 00:44:29.933688 7 log.go:172] (0xc002dc4210) (0xc0013bb900) Create stream I0313 00:44:29.933699 7 log.go:172] (0xc002dc4210) (0xc0013bb900) Stream added, broadcasting: 3 I0313 00:44:29.934552 7 log.go:172] (0xc002dc4210) Reply frame received for 3 I0313 00:44:29.934590 7 log.go:172] (0xc002dc4210) (0xc001361680) Create stream I0313 00:44:29.934604 7 log.go:172] (0xc002dc4210) (0xc001361680) Stream added, broadcasting: 5 I0313 00:44:29.935765 7 log.go:172] (0xc002dc4210) Reply frame received for 5 I0313 00:44:30.009077 7 log.go:172] (0xc002dc4210) Data frame received for 3 I0313 00:44:30.009101 7 log.go:172] (0xc0013bb900) (3) Data frame handling I0313 00:44:30.009115 7 log.go:172] (0xc0013bb900) (3) Data frame sent I0313 00:44:30.009311 7 log.go:172] (0xc002dc4210) Data frame received for 3 I0313 00:44:30.009334 7 log.go:172] (0xc0013bb900) (3) Data frame handling I0313 00:44:30.009363 7 log.go:172] (0xc002dc4210) Data frame received for 5 I0313 00:44:30.009377 7 log.go:172] (0xc001361680) (5) Data frame handling I0313 00:44:30.010515 7 log.go:172] (0xc002dc4210) Data frame received for 1 I0313 00:44:30.010547 7 log.go:172] (0xc002079d60) (1) Data frame handling I0313 00:44:30.010559 7 log.go:172] (0xc002079d60) (1) Data frame sent I0313 00:44:30.010575 7 log.go:172] (0xc002dc4210) (0xc002079d60) Stream removed, broadcasting: 1 I0313 00:44:30.010595 7 log.go:172] (0xc002dc4210) Go away received I0313 00:44:30.010737 7 log.go:172] (0xc002dc4210) (0xc002079d60) Stream removed, broadcasting: 1 I0313 00:44:30.010761 7 log.go:172] (0xc002dc4210) (0xc0013bb900) Stream removed, broadcasting: 3 I0313 00:44:30.010782 7 log.go:172] (0xc002dc4210) (0xc001361680) Stream removed, broadcasting: 5 Mar 13 00:44:30.010: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:44:30.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4557" for this suite. • [SLOW TEST:18.426 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":274,"skipped":4677,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 13 00:44:30.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: validating cluster-info Mar 13 00:44:30.116: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config cluster-info' Mar 13 00:44:30.198: INFO: stderr: "" Mar 13 00:44:30.198: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32776\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32776/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 13 00:44:30.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8367" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":275,"completed":275,"skipped":4716,"failed":0} SMar 13 00:44:30.222: INFO: Running AfterSuite actions on all nodes Mar 13 00:44:30.222: INFO: Running AfterSuite actions on node 1 Mar 13 00:44:30.222: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml {"msg":"Test Suite completed","total":275,"completed":275,"skipped":4717,"failed":0} Ran 275 of 4992 Specs in 4128.670 seconds SUCCESS! -- 275 Passed | 0 Failed | 0 Pending | 4717 Skipped PASS