(200; 3.399338ms)
[AfterEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:35:52.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4812" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":275,"completed":2,"skipped":53,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
works for CRD preserving unknown fields at the schema root [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:35:52.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 12 23:35:53.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Mar 12 23:35:55.968: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3175 create -f -'
Mar 12 23:35:57.963: INFO: stderr: ""
Mar 12 23:35:57.963: INFO: stdout: "e2e-test-crd-publish-openapi-792-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Mar 12 23:35:57.963: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3175 delete e2e-test-crd-publish-openapi-792-crds test-cr'
Mar 12 23:35:58.090: INFO: stderr: ""
Mar 12 23:35:58.090: INFO: stdout: "e2e-test-crd-publish-openapi-792-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Mar 12 23:35:58.090: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3175 apply -f -'
Mar 12 23:35:58.314: INFO: stderr: ""
Mar 12 23:35:58.314: INFO: stdout: "e2e-test-crd-publish-openapi-792-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Mar 12 23:35:58.314: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3175 delete e2e-test-crd-publish-openapi-792-crds test-cr'
Mar 12 23:35:58.406: INFO: stderr: ""
Mar 12 23:35:58.406: INFO: stdout: "e2e-test-crd-publish-openapi-792-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Mar 12 23:35:58.406: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-792-crds'
Mar 12 23:35:58.604: INFO: stderr: ""
Mar 12 23:35:58.604: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-792-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:36:01.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3175" for this suite.
• [SLOW TEST:8.612 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for CRD preserving unknown fields at the schema root [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":3,"skipped":61,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
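
An aside on what this spec exercises: in apiextensions.k8s.io/v1, unknown fields are pruned unless the schema opts out, so "preserving unknown fields at the schema root" comes down to setting x-kubernetes-preserve-unknown-fields on the root object schema. A minimal, hypothetical Go sketch using current k8s.io/apiextensions-apiserver types (not the test's own code; names are illustrative):

package main

import (
    "fmt"

    apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
    "sigs.k8s.io/yaml"
)

func main() {
    preserve := true
    // Root schema that accepts arbitrary, unvalidated fields, as the CRD in this test does.
    schema := apiextv1.JSONSchemaProps{
        Type:                   "object",
        XPreserveUnknownFields: &preserve,
    }
    out, _ := yaml.Marshal(schema)
    fmt.Print(string(out))
}
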
------------------------------
[sig-api-machinery] Garbage collector
should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:36:01.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0312 23:36:07.627991 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 12 23:36:07.628: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:36:07.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9000" for this suite.
• [SLOW TEST:6.089 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":4,"skipped":105,"failed":0}
SSSSSSSSSS
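
For reference, the deleteOptions behaviour checked above is what foreground propagation gives you: the owner object is kept (with a deletionTimestamp set) until the garbage collector has removed its dependent pods. A hypothetical client-go sketch, reusing the gc-9000 namespace from the log; the RC name and kubeconfig path are assumptions:

package main

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Assumed kubeconfig path; this test run uses /root/.kube/config.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    // Foreground propagation: the RC stays until its pods are deleted.
    policy := metav1.DeletePropagationForeground
    err = client.CoreV1().ReplicationControllers("gc-9000").Delete(
        context.TODO(), "simpletest-rc", metav1.DeleteOptions{PropagationPolicy: &policy})
    if err != nil {
        panic(err)
    }
}
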
------------------------------
[k8s.io] InitContainer [NodeConformance]
should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:36:07.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Mar 12 23:36:07.672: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:36:11.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2373" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":5,"skipped":115,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
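
For context, the pod shape this spec creates looks roughly like the sketch below: restartPolicy Never plus an init container that exits non-zero, so the app container must never start and the pod ends up Failed. Image, names and commands here are illustrative, not taken from the test source:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "sigs.k8s.io/yaml"
)

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "init-fail-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            InitContainers: []corev1.Container{
                // Failing init container: the pod can never progress past initialization.
                {Name: "init", Image: "busybox", Command: []string{"sh", "-c", "exit 1"}},
            },
            Containers: []corev1.Container{
                {Name: "app", Image: "busybox", Command: []string{"sh", "-c", "echo should never run"}},
            },
        },
    }
    out, _ := yaml.Marshal(pod)
    fmt.Print(string(out))
}
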
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod
should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:36:11.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:36:11.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1753" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":6,"skipped":141,"failed":0}
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:36:11.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-6171
[It] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-6171
STEP: Creating statefulset with conflicting port in namespace statefulset-6171
STEP: Waiting until pod test-pod will start running in namespace statefulset-6171
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-6171
Mar 12 23:36:13.404: INFO: Observed stateful pod in namespace: statefulset-6171, name: ss-0, uid: fa4cb0e0-6551-4ce4-a532-8fe5f8ca2b87, status phase: Pending. Waiting for statefulset controller to delete.
Mar 12 23:36:22.461: INFO: Observed stateful pod in namespace: statefulset-6171, name: ss-0, uid: fa4cb0e0-6551-4ce4-a532-8fe5f8ca2b87, status phase: Failed. Waiting for statefulset controller to delete.
Mar 12 23:36:22.465: INFO: Observed stateful pod in namespace: statefulset-6171, name: ss-0, uid: fa4cb0e0-6551-4ce4-a532-8fe5f8ca2b87, status phase: Failed. Waiting for statefulset controller to delete.
Mar 12 23:36:22.506: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-6171
STEP: Removing pod with conflicting port in namespace statefulset-6171
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-6171 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Mar 12 23:36:24.596: INFO: Deleting all statefulset in ns statefulset-6171
Mar 12 23:36:24.598: INFO: Scaling statefulset ss to 0
Mar 12 23:36:34.626: INFO: Waiting for statefulset status.replicas updated to 0
Mar 12 23:36:34.628: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:36:34.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6171" for this suite.
• [SLOW TEST:23.334 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":7,"skipped":143,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch
should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:36:34.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
Mar 12 23:36:35.282: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4344'
Mar 12 23:36:35.585: INFO: stderr: ""
Mar 12 23:36:35.585: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Mar 12 23:36:36.588: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 12 23:36:36.588: INFO: Found 0 / 1
Mar 12 23:36:37.589: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 12 23:36:37.589: INFO: Found 1 / 1
Mar 12 23:36:37.589: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Mar 12 23:36:37.592: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 12 23:36:37.592: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Mar 12 23:36:37.592: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config patch pod agnhost-master-csb26 --namespace=kubectl-4344 -p {"metadata":{"annotations":{"x":"y"}}}'
Mar 12 23:36:37.705: INFO: stderr: ""
Mar 12 23:36:37.705: INFO: stdout: "pod/agnhost-master-csb26 patched\n"
STEP: checking annotations
Mar 12 23:36:37.711: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 12 23:36:37.711: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:36:37.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4344" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":275,"completed":8,"skipped":165,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
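
The patch issued above can also be expressed directly against the API instead of via kubectl. A hypothetical client-go equivalent of that exact strategic merge patch (pod name and namespace copied from the log; the kubeconfig path is an assumption):

package main

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    // Same payload as the kubectl patch in the log: add annotation x=y.
    patch := []byte(`{"metadata":{"annotations":{"x":"y"}}}`)
    _, err = client.CoreV1().Pods("kubectl-4344").Patch(
        context.TODO(), "agnhost-master-csb26", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
    if err != nil {
        panic(err)
    }
}
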
------------------------------
[sig-node] Downward API
should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:36:37.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Mar 12 23:36:37.791: INFO: Waiting up to 5m0s for pod "downward-api-244a62ef-8651-4651-bb21-d01bae0fe2b7" in namespace "downward-api-8536" to be "Succeeded or Failed"
Mar 12 23:36:37.795: INFO: Pod "downward-api-244a62ef-8651-4651-bb21-d01bae0fe2b7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.762187ms
Mar 12 23:36:39.824: INFO: Pod "downward-api-244a62ef-8651-4651-bb21-d01bae0fe2b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.032533752s
STEP: Saw pod success
Mar 12 23:36:39.824: INFO: Pod "downward-api-244a62ef-8651-4651-bb21-d01bae0fe2b7" satisfied condition "Succeeded or Failed"
Mar 12 23:36:39.837: INFO: Trying to get logs from node latest-worker2 pod downward-api-244a62ef-8651-4651-bb21-d01bae0fe2b7 container dapi-container:
STEP: delete the pod
Mar 12 23:36:39.943: INFO: Waiting for pod downward-api-244a62ef-8651-4651-bb21-d01bae0fe2b7 to disappear
Mar 12 23:36:39.951: INFO: Pod downward-api-244a62ef-8651-4651-bb21-d01bae0fe2b7 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:36:39.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8536" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":9,"skipped":189,"failed":0}
SSSSSSSSS
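
The downward API pattern verified above is an env var whose value comes from a fieldRef on status.hostIP. A minimal, hypothetical sketch of such a container (names, image and command are illustrative):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "sigs.k8s.io/yaml"
)

func main() {
    c := corev1.Container{
        Name:    "dapi-container",
        Image:   "busybox",
        Command: []string{"sh", "-c", "echo $HOST_IP"},
        Env: []corev1.EnvVar{{
            Name: "HOST_IP",
            ValueFrom: &corev1.EnvVarSource{
                // Kubelet injects the node's host IP at container start.
                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
            },
        }},
    }
    out, _ := yaml.Marshal(c)
    fmt.Print(string(out))
}
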
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:36:39.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Mar 12 23:36:46.119: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 12 23:36:46.127: INFO: Pod pod-with-prestop-http-hook still exists
Mar 12 23:36:48.127: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 12 23:36:48.130: INFO: Pod pod-with-prestop-http-hook still exists
Mar 12 23:36:50.127: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 12 23:36:50.130: INFO: Pod pod-with-prestop-http-hook still exists
Mar 12 23:36:52.127: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 12 23:36:52.130: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:36:52.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9472" for this suite.
• [SLOW TEST:12.192 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":10,"skipped":198,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
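
The hook under test above is a preStop HTTP GET aimed at the helper pod created in BeforeEach. A hypothetical sketch of a container carrying such a hook (field names follow current k8s.io/api, where the hook type is LifecycleHandler; path, port and image are made up):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
    "sigs.k8s.io/yaml"
)

func main() {
    c := corev1.Container{
        Name:  "pod-with-prestop-http-hook",
        Image: "nginx",
        Lifecycle: &corev1.Lifecycle{
            PreStop: &corev1.LifecycleHandler{
                // Kubelet issues this GET before sending SIGTERM to the container.
                HTTPGet: &corev1.HTTPGetAction{
                    Path: "/echo?msg=prestop",
                    Port: intstr.FromInt(8080),
                },
            },
        },
    }
    out, _ := yaml.Marshal(c)
    fmt.Print(string(out))
}
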
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:36:52.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-2475
[It] should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
Mar 12 23:36:52.240: INFO: Found 0 stateful pods, waiting for 3
Mar 12 23:37:02.250: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 12 23:37:02.250: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 12 23:37:02.250: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Mar 12 23:37:02.258: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2475 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 12 23:37:02.462: INFO: stderr: "I0312 23:37:02.380130 188 log.go:172] (0xc000b71290) (0xc000b46500) Create stream\nI0312 23:37:02.380170 188 log.go:172] (0xc000b71290) (0xc000b46500) Stream added, broadcasting: 1\nI0312 23:37:02.383796 188 log.go:172] (0xc000b71290) Reply frame received for 1\nI0312 23:37:02.383855 188 log.go:172] (0xc000b71290) (0xc000b14280) Create stream\nI0312 23:37:02.383872 188 log.go:172] (0xc000b71290) (0xc000b14280) Stream added, broadcasting: 3\nI0312 23:37:02.385145 188 log.go:172] (0xc000b71290) Reply frame received for 3\nI0312 23:37:02.385179 188 log.go:172] (0xc000b71290) (0xc000b465a0) Create stream\nI0312 23:37:02.385192 188 log.go:172] (0xc000b71290) (0xc000b465a0) Stream added, broadcasting: 5\nI0312 23:37:02.386343 188 log.go:172] (0xc000b71290) Reply frame received for 5\nI0312 23:37:02.440452 188 log.go:172] (0xc000b71290) Data frame received for 5\nI0312 23:37:02.440479 188 log.go:172] (0xc000b465a0) (5) Data frame handling\nI0312 23:37:02.440488 188 log.go:172] (0xc000b465a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0312 23:37:02.457621 188 log.go:172] (0xc000b71290) Data frame received for 3\nI0312 23:37:02.457643 188 log.go:172] (0xc000b14280) (3) Data frame handling\nI0312 23:37:02.457662 188 log.go:172] (0xc000b14280) (3) Data frame sent\nI0312 23:37:02.457671 188 log.go:172] (0xc000b71290) Data frame received for 3\nI0312 23:37:02.457679 188 log.go:172] (0xc000b14280) (3) Data frame handling\nI0312 23:37:02.458085 188 log.go:172] (0xc000b71290) Data frame received for 5\nI0312 23:37:02.458103 188 log.go:172] (0xc000b465a0) (5) Data frame handling\nI0312 23:37:02.459424 188 log.go:172] (0xc000b71290) Data frame received for 1\nI0312 23:37:02.459442 188 log.go:172] (0xc000b46500) (1) Data frame handling\nI0312 23:37:02.459449 188 log.go:172] (0xc000b46500) (1) Data frame sent\nI0312 23:37:02.459458 188 log.go:172] (0xc000b71290) (0xc000b46500) Stream removed, broadcasting: 1\nI0312 23:37:02.459475 188 log.go:172] (0xc000b71290) Go away received\nI0312 23:37:02.459723 188 log.go:172] (0xc000b71290) (0xc000b46500) Stream removed, broadcasting: 1\nI0312 23:37:02.459737 188 log.go:172] (0xc000b71290) (0xc000b14280) Stream removed, broadcasting: 3\nI0312 23:37:02.459743 188 log.go:172] (0xc000b71290) (0xc000b465a0) Stream removed, broadcasting: 5\n"
Mar 12 23:37:02.462: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 12 23:37:02.462: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Mar 12 23:37:12.498: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Mar 12 23:37:22.545: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2475 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 12 23:37:22.758: INFO: stderr: "I0312 23:37:22.679423 209 log.go:172] (0xc000a5d760) (0xc000ad2960) Create stream\nI0312 23:37:22.679473 209 log.go:172] (0xc000a5d760) (0xc000ad2960) Stream added, broadcasting: 1\nI0312 23:37:22.683503 209 log.go:172] (0xc000a5d760) Reply frame received for 1\nI0312 23:37:22.683535 209 log.go:172] (0xc000a5d760) (0xc0005817c0) Create stream\nI0312 23:37:22.683543 209 log.go:172] (0xc000a5d760) (0xc0005817c0) Stream added, broadcasting: 3\nI0312 23:37:22.684337 209 log.go:172] (0xc000a5d760) Reply frame received for 3\nI0312 23:37:22.684368 209 log.go:172] (0xc000a5d760) (0xc000442be0) Create stream\nI0312 23:37:22.684381 209 log.go:172] (0xc000a5d760) (0xc000442be0) Stream added, broadcasting: 5\nI0312 23:37:22.685242 209 log.go:172] (0xc000a5d760) Reply frame received for 5\nI0312 23:37:22.753171 209 log.go:172] (0xc000a5d760) Data frame received for 3\nI0312 23:37:22.753204 209 log.go:172] (0xc0005817c0) (3) Data frame handling\nI0312 23:37:22.753219 209 log.go:172] (0xc0005817c0) (3) Data frame sent\nI0312 23:37:22.753324 209 log.go:172] (0xc000a5d760) Data frame received for 5\nI0312 23:37:22.753335 209 log.go:172] (0xc000442be0) (5) Data frame handling\nI0312 23:37:22.753343 209 log.go:172] (0xc000442be0) (5) Data frame sent\nI0312 23:37:22.753355 209 log.go:172] (0xc000a5d760) Data frame received for 5\nI0312 23:37:22.753361 209 log.go:172] (0xc000442be0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0312 23:37:22.753393 209 log.go:172] (0xc000a5d760) Data frame received for 3\nI0312 23:37:22.753416 209 log.go:172] (0xc0005817c0) (3) Data frame handling\nI0312 23:37:22.754455 209 log.go:172] (0xc000a5d760) Data frame received for 1\nI0312 23:37:22.754481 209 log.go:172] (0xc000ad2960) (1) Data frame handling\nI0312 23:37:22.754500 209 log.go:172] (0xc000ad2960) (1) Data frame sent\nI0312 23:37:22.754511 209 log.go:172] (0xc000a5d760) (0xc000ad2960) Stream removed, broadcasting: 1\nI0312 23:37:22.754533 209 log.go:172] (0xc000a5d760) Go away received\nI0312 23:37:22.754852 209 log.go:172] (0xc000a5d760) (0xc000ad2960) Stream removed, broadcasting: 1\nI0312 23:37:22.754878 209 log.go:172] (0xc000a5d760) (0xc0005817c0) Stream removed, broadcasting: 3\nI0312 23:37:22.754886 209 log.go:172] (0xc000a5d760) (0xc000442be0) Stream removed, broadcasting: 5\n"
Mar 12 23:37:22.758: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 12 23:37:22.758: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Mar 12 23:37:32.776: INFO: Waiting for StatefulSet statefulset-2475/ss2 to complete update
Mar 12 23:37:32.776: INFO: Waiting for Pod statefulset-2475/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Mar 12 23:37:42.782: INFO: Waiting for StatefulSet statefulset-2475/ss2 to complete update
STEP: Rolling back to a previous revision
Mar 12 23:37:52.783: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2475 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 12 23:37:53.031: INFO: stderr: "I0312 23:37:52.925928 230 log.go:172] (0xc000ad7340) (0xc000b22820) Create stream\nI0312 23:37:52.925978 230 log.go:172] (0xc000ad7340) (0xc000b22820) Stream added, broadcasting: 1\nI0312 23:37:52.930036 230 log.go:172] (0xc000ad7340) Reply frame received for 1\nI0312 23:37:52.930078 230 log.go:172] (0xc000ad7340) (0xc00068f680) Create stream\nI0312 23:37:52.930086 230 log.go:172] (0xc000ad7340) (0xc00068f680) Stream added, broadcasting: 3\nI0312 23:37:52.930925 230 log.go:172] (0xc000ad7340) Reply frame received for 3\nI0312 23:37:52.930956 230 log.go:172] (0xc000ad7340) (0xc000538aa0) Create stream\nI0312 23:37:52.930969 230 log.go:172] (0xc000ad7340) (0xc000538aa0) Stream added, broadcasting: 5\nI0312 23:37:52.931769 230 log.go:172] (0xc000ad7340) Reply frame received for 5\nI0312 23:37:53.009572 230 log.go:172] (0xc000ad7340) Data frame received for 5\nI0312 23:37:53.009597 230 log.go:172] (0xc000538aa0) (5) Data frame handling\nI0312 23:37:53.009612 230 log.go:172] (0xc000538aa0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0312 23:37:53.026194 230 log.go:172] (0xc000ad7340) Data frame received for 5\nI0312 23:37:53.026211 230 log.go:172] (0xc000538aa0) (5) Data frame handling\nI0312 23:37:53.026236 230 log.go:172] (0xc000ad7340) Data frame received for 3\nI0312 23:37:53.026250 230 log.go:172] (0xc00068f680) (3) Data frame handling\nI0312 23:37:53.026267 230 log.go:172] (0xc00068f680) (3) Data frame sent\nI0312 23:37:53.026281 230 log.go:172] (0xc000ad7340) Data frame received for 3\nI0312 23:37:53.026294 230 log.go:172] (0xc00068f680) (3) Data frame handling\nI0312 23:37:53.027920 230 log.go:172] (0xc000ad7340) Data frame received for 1\nI0312 23:37:53.027940 230 log.go:172] (0xc000b22820) (1) Data frame handling\nI0312 23:37:53.027960 230 log.go:172] (0xc000b22820) (1) Data frame sent\nI0312 23:37:53.027976 230 log.go:172] (0xc000ad7340) (0xc000b22820) Stream removed, broadcasting: 1\nI0312 23:37:53.027998 230 log.go:172] (0xc000ad7340) Go away received\nI0312 23:37:53.028324 230 log.go:172] (0xc000ad7340) (0xc000b22820) Stream removed, broadcasting: 1\nI0312 23:37:53.028343 230 log.go:172] (0xc000ad7340) (0xc00068f680) Stream removed, broadcasting: 3\nI0312 23:37:53.028352 230 log.go:172] (0xc000ad7340) (0xc000538aa0) Stream removed, broadcasting: 5\n"
Mar 12 23:37:53.031: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 12 23:37:53.032: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Mar 12 23:38:03.064: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Mar 12 23:38:13.096: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2475 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 12 23:38:13.299: INFO: stderr: "I0312 23:38:13.233390 252 log.go:172] (0xc000a35080) (0xc0009528c0) Create stream\nI0312 23:38:13.233432 252 log.go:172] (0xc000a35080) (0xc0009528c0) Stream added, broadcasting: 1\nI0312 23:38:13.237284 252 log.go:172] (0xc000a35080) Reply frame received for 1\nI0312 23:38:13.237311 252 log.go:172] (0xc000a35080) (0xc0007d9540) Create stream\nI0312 23:38:13.237317 252 log.go:172] (0xc000a35080) (0xc0007d9540) Stream added, broadcasting: 3\nI0312 23:38:13.238593 252 log.go:172] (0xc000a35080) Reply frame received for 3\nI0312 23:38:13.238645 252 log.go:172] (0xc000a35080) (0xc000608960) Create stream\nI0312 23:38:13.238657 252 log.go:172] (0xc000a35080) (0xc000608960) Stream added, broadcasting: 5\nI0312 23:38:13.241119 252 log.go:172] (0xc000a35080) Reply frame received for 5\nI0312 23:38:13.295265 252 log.go:172] (0xc000a35080) Data frame received for 3\nI0312 23:38:13.295296 252 log.go:172] (0xc000a35080) Data frame received for 5\nI0312 23:38:13.295314 252 log.go:172] (0xc000608960) (5) Data frame handling\nI0312 23:38:13.295325 252 log.go:172] (0xc000608960) (5) Data frame sent\nI0312 23:38:13.295331 252 log.go:172] (0xc000a35080) Data frame received for 5\nI0312 23:38:13.295338 252 log.go:172] (0xc000608960) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0312 23:38:13.295355 252 log.go:172] (0xc0007d9540) (3) Data frame handling\nI0312 23:38:13.295361 252 log.go:172] (0xc0007d9540) (3) Data frame sent\nI0312 23:38:13.295365 252 log.go:172] (0xc000a35080) Data frame received for 3\nI0312 23:38:13.295370 252 log.go:172] (0xc0007d9540) (3) Data frame handling\nI0312 23:38:13.296628 252 log.go:172] (0xc000a35080) Data frame received for 1\nI0312 23:38:13.296642 252 log.go:172] (0xc0009528c0) (1) Data frame handling\nI0312 23:38:13.296648 252 log.go:172] (0xc0009528c0) (1) Data frame sent\nI0312 23:38:13.296656 252 log.go:172] (0xc000a35080) (0xc0009528c0) Stream removed, broadcasting: 1\nI0312 23:38:13.296667 252 log.go:172] (0xc000a35080) Go away received\nI0312 23:38:13.296989 252 log.go:172] (0xc000a35080) (0xc0009528c0) Stream removed, broadcasting: 1\nI0312 23:38:13.297008 252 log.go:172] (0xc000a35080) (0xc0007d9540) Stream removed, broadcasting: 3\nI0312 23:38:13.297015 252 log.go:172] (0xc000a35080) (0xc000608960) Stream removed, broadcasting: 5\n"
Mar 12 23:38:13.299: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 12 23:38:13.299: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Mar 12 23:38:23.315: INFO: Waiting for StatefulSet statefulset-2475/ss2 to complete update
Mar 12 23:38:23.315: INFO: Waiting for Pod statefulset-2475/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Mar 12 23:38:23.315: INFO: Waiting for Pod statefulset-2475/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Mar 12 23:38:33.321: INFO: Waiting for StatefulSet statefulset-2475/ss2 to complete update
Mar 12 23:38:33.321: INFO: Waiting for Pod statefulset-2475/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Mar 12 23:38:43.320: INFO: Waiting for StatefulSet statefulset-2475/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Mar 12 23:38:53.321: INFO: Deleting all statefulset in ns statefulset-2475
Mar 12 23:38:53.322: INFO: Scaling statefulset ss2 to 0
Mar 12 23:39:23.341: INFO: Waiting for statefulset status.replicas updated to 0
Mar 12 23:39:23.344: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:39:23.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2475" for this suite.
• [SLOW TEST:151.213 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":11,"skipped":257,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:39:23.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Mar 12 23:39:23.448: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a8d4f1d8-14d3-4237-a532-b9130ab82edc" in namespace "downward-api-7529" to be "Succeeded or Failed"
Mar 12 23:39:23.463: INFO: Pod "downwardapi-volume-a8d4f1d8-14d3-4237-a532-b9130ab82edc": Phase="Pending", Reason="", readiness=false. Elapsed: 14.786688ms
Mar 12 23:39:25.466: INFO: Pod "downwardapi-volume-a8d4f1d8-14d3-4237-a532-b9130ab82edc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.018402383s
STEP: Saw pod success
Mar 12 23:39:25.466: INFO: Pod "downwardapi-volume-a8d4f1d8-14d3-4237-a532-b9130ab82edc" satisfied condition "Succeeded or Failed"
Mar 12 23:39:25.470: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-a8d4f1d8-14d3-4237-a532-b9130ab82edc container client-container:
STEP: delete the pod
Mar 12 23:39:25.518: INFO: Waiting for pod downwardapi-volume-a8d4f1d8-14d3-4237-a532-b9130ab82edc to disappear
Mar 12 23:39:25.524: INFO: Pod downwardapi-volume-a8d4f1d8-14d3-4237-a532-b9130ab82edc no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:39:25.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7529" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":12,"skipped":268,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
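
What this spec checks is the downward API volume's resourceFieldRef fallback: when the container sets no memory limit, limits.memory resolves to the node's allocatable memory. A hypothetical sketch of such a volume definition (container and file names are illustrative):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "sigs.k8s.io/yaml"
)

func main() {
    vol := corev1.Volume{
        Name: "podinfo",
        VolumeSource: corev1.VolumeSource{
            DownwardAPI: &corev1.DownwardAPIVolumeSource{
                Items: []corev1.DownwardAPIVolumeFile{{
                    Path: "memory_limit",
                    // With no limit set on the container, this falls back to node allocatable memory.
                    ResourceFieldRef: &corev1.ResourceFieldSelector{
                        ContainerName: "client-container",
                        Resource:      "limits.memory",
                    },
                }},
            },
        },
    }
    out, _ := yaml.Marshal(vol)
    fmt.Print(string(out))
}
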
------------------------------
[sig-network] DNS
should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:39:25.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3909.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3909.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 12 23:39:29.682: INFO: DNS probes using dns-3909/dns-test-8157071b-db80-47e6-855d-a59d7b741496 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:39:29.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3909" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":275,"completed":13,"skipped":292,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:39:29.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Mar 12 23:39:29.831: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:39:42.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9761" for this suite.
• [SLOW TEST:12.731 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":14,"skipped":336,"failed":0}
SSSSSSSSS
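
The "setting up watch" step above is an ordinary client-go watch on pods in the test namespace; the creation, graceful deletion and final delete event all arrive on the same channel. A hypothetical sketch (namespace copied from the log, kubeconfig path assumed):

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    w, err := client.CoreV1().Pods("pods-9761").Watch(context.TODO(), metav1.ListOptions{})
    if err != nil {
        panic(err)
    }
    defer w.Stop()
    for ev := range w.ResultChan() {
        fmt.Println("event:", ev.Type) // e.g. ADDED, MODIFIED, DELETED
    }
}
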
------------------------------
[sig-storage] Secrets
should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:39:42.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-16c4a896-3487-4d34-bd0a-ec010f9cfdcc
STEP: Creating a pod to test consume secrets
Mar 12 23:39:42.687: INFO: Waiting up to 5m0s for pod "pod-secrets-5ffee55a-42bc-4343-b1cd-c53ae8716a62" in namespace "secrets-1252" to be "Succeeded or Failed"
Mar 12 23:39:42.704: INFO: Pod "pod-secrets-5ffee55a-42bc-4343-b1cd-c53ae8716a62": Phase="Pending", Reason="", readiness=false. Elapsed: 17.264086ms
Mar 12 23:39:44.708: INFO: Pod "pod-secrets-5ffee55a-42bc-4343-b1cd-c53ae8716a62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.020505471s
STEP: Saw pod success
Mar 12 23:39:44.708: INFO: Pod "pod-secrets-5ffee55a-42bc-4343-b1cd-c53ae8716a62" satisfied condition "Succeeded or Failed"
Mar 12 23:39:44.711: INFO: Trying to get logs from node latest-worker pod pod-secrets-5ffee55a-42bc-4343-b1cd-c53ae8716a62 container secret-volume-test:
STEP: delete the pod
Mar 12 23:39:44.731: INFO: Waiting for pod pod-secrets-5ffee55a-42bc-4343-b1cd-c53ae8716a62 to disappear
Mar 12 23:39:44.734: INFO: Pod pod-secrets-5ffee55a-42bc-4343-b1cd-c53ae8716a62 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:39:44.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1252" for this suite.
STEP: Destroying namespace "secret-namespace-7453" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":15,"skipped":345,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Kubectl replace
should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:39:44.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Mar 12 23:39:44.821: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-3536'
Mar 12 23:39:44.906: INFO: stderr: ""
Mar 12 23:39:44.906: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Mar 12 23:39:49.956: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-3536 -o json'
Mar 12 23:39:50.068: INFO: stderr: ""
Mar 12 23:39:50.068: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-03-12T23:39:44Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-3536\",\n \"resourceVersion\": \"1208596\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-3536/pods/e2e-test-httpd-pod\",\n \"uid\": \"f649445f-7c18-4a42-9005-cb84f721feb9\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-q8m97\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-q8m97\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-q8m97\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-12T23:39:45Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-12T23:39:46Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-12T23:39:46Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-12T23:39:44Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://f4ff5dddbc2ee45236064abe99e10f43f8a0620ce6bfb187cdc76ef5caf76f12\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-03-12T23:39:46Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.16\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.89\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.89\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-03-12T23:39:45Z\"\n }\n}\n"
STEP: replace the image in the pod
Mar 12 23:39:50.068: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-3536'
Mar 12 23:39:50.262: INFO: stderr: ""
Mar 12 23:39:50.262: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Mar 12 23:39:50.267: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3536'
Mar 12 23:39:52.515: INFO: stderr: ""
Mar 12 23:39:52.515: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:39:52.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3536" for this suite.
• [SLOW TEST:7.776 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450
should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":275,"completed":16,"skipped":348,"failed":0}
SS
------------------------------
[sig-api-machinery] Garbage collector
should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:39:52.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0312 23:40:02.735251 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 12 23:40:02.735: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:40:02.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6699" for this suite.
• [SLOW TEST:10.219 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":17,"skipped":350,"failed":0}
SSSSSS
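Note: behind the scenes this is plain metadata.ownerReferences: each dependent pod lists both ReplicationControllers as owners, so deleting one owner (even in foreground mode, which waits for dependents) must not remove pods that still have a valid second owner. A rough sketch of attaching a second owner; the pod name is illustrative and the uid values must match the live RCs' metadata.uid.
TO_DELETE_UID=$(kubectl get rc simpletest-rc-to-be-deleted --namespace=gc-6699 -o jsonpath='{.metadata.uid}')
TO_STAY_UID=$(kubectl get rc simpletest-rc-to-stay --namespace=gc-6699 -o jsonpath='{.metadata.uid}')
kubectl patch pod simpletest-pod-example --namespace=gc-6699 --type=merge -p "{
  \"metadata\": {\"ownerReferences\": [
    {\"apiVersion\": \"v1\", \"kind\": \"ReplicationController\", \"name\": \"simpletest-rc-to-be-deleted\", \"uid\": \"$TO_DELETE_UID\"},
    {\"apiVersion\": \"v1\", \"kind\": \"ReplicationController\", \"name\": \"simpletest-rc-to-stay\", \"uid\": \"$TO_STAY_UID\"}
  ]}}"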
------------------------------
[k8s.io] Pods
should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:40:02.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 12 23:40:02.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:40:04.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2703" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":18,"skipped":356,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
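Note: log retrieval here goes through the pod's log subresource; the websocket variant the test uses streams from the same URL path. A rough plain-HTTP equivalent (the pod name is a placeholder for the pod created above):
kubectl get --raw "/api/v1/namespaces/pods-2703/pods/<pod-name>/log"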
------------------------------
[k8s.io] [sig-node] Events
should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:40:04.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Mar 12 23:40:06.955: INFO: &Pod{ObjectMeta:{send-events-bb7d3bd3-0f91-4d29-8fa4-6c244da5649a events-7021 /api/v1/namespaces/events-7021/pods/send-events-bb7d3bd3-0f91-4d29-8fa4-6c244da5649a 5fbaa80f-f8b9-4763-ba3c-4d3d8550f81b 1208870 0 2020-03-12 23:40:04 +0000 UTC map[name:foo time:929768449] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mvkfp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mvkfp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mvkfp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 23:40:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 23:40:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 23:40:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 
23:40:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.2.215,StartTime:2020-03-12 23:40:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-12 23:40:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://2cad2e1b0356d8c6d326e354ac6ba591f2f0d1f926e4d6d2ab0f744252ad0bba,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.215,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: checking for scheduler event about the pod
Mar 12 23:40:08.959: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Mar 12 23:40:10.964: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:40:10.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-7021" for this suite.
• [SLOW TEST:6.145 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":275,"completed":19,"skipped":390,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
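Note: once the pod is up, the scheduler and kubelet events this test waits for can also be listed directly with a field selector on the involved object, e.g. for the pod created in this run:
kubectl get events --namespace=events-7021 --field-selector involvedObject.name=send-events-bb7d3bd3-0f91-4d29-8fa4-6c244da5649a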
------------------------------
[k8s.io] Probing container
should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:40:10.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod test-webserver-e93f121a-2930-405a-99f4-5562d5348646 in namespace container-probe-9789
Mar 12 23:40:13.068: INFO: Started pod test-webserver-e93f121a-2930-405a-99f4-5562d5348646 in namespace container-probe-9789
STEP: checking the pod's current state and verifying that restartCount is present
Mar 12 23:40:13.072: INFO: Initial restart count of pod test-webserver-e93f121a-2930-405a-99f4-5562d5348646 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:44:13.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9789" for this suite.
• [SLOW TEST:242.622 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":20,"skipped":425,"failed":0}
SSS
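Note: what keeps restartCount at 0 here is an HTTP liveness probe against /healthz on a server that keeps answering it. A minimal sketch of that shape; the image, args, port and timings are assumptions rather than the test's exact fixture.
kubectl create --namespace=container-probe-9789 -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver-demo
spec:
  containers:
  - name: test-webserver
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12   # assumed: any container answering GET /healthz with 200 works
    args: ["test-webserver"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 80
      initialDelaySeconds: 15
      failureThreshold: 3
EOF
kubectl get pod test-webserver-demo --namespace=container-probe-9789 -o jsonpath='{.status.containerStatuses[0].restartCount}'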
------------------------------
[k8s.io] Pods
should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:44:13.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating pod
Mar 12 23:44:17.694: INFO: Pod pod-hostip-7fde0035-5166-4375-afa1-476e58e5e069 has hostIP: 172.17.0.16
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:44:17.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7649" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":21,"skipped":428,"failed":0}
SSS
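Note: the check above reduces to reading status.hostIP once the pod has been scheduled, e.g. for the pod created in this run:
kubectl get pod pod-hostip-7fde0035-5166-4375-afa1-476e58e5e069 --namespace=pods-7649 -o jsonpath='{.status.hostIP}'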
------------------------------
[sig-node] ConfigMap
should fail to create ConfigMap with empty key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:44:17.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap that has name configmap-test-emptyKey-e13c5653-5215-447f-a346-68f8fc7c51ce
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:44:17.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6794" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":22,"skipped":431,"failed":0}
SSS
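Note: the failure being verified is server-side key validation: ConfigMap data keys must be non-empty (and limited to alphanumerics, '-', '_' and '.'), so a manifest along these lines is expected to be rejected by the API server rather than created (the name below is illustrative):
kubectl create --namespace=configmap-6794 -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-emptykey
data:
  "": "value"
EOF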
------------------------------
[sig-storage] Projected downwardAPI
should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:44:17.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Mar 12 23:44:17.826: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4f4f3064-66d5-4638-b627-d117b9cef636" in namespace "projected-5403" to be "Succeeded or Failed"
Mar 12 23:44:17.831: INFO: Pod "downwardapi-volume-4f4f3064-66d5-4638-b627-d117b9cef636": Phase="Pending", Reason="", readiness=false. Elapsed: 5.121527ms
Mar 12 23:44:19.835: INFO: Pod "downwardapi-volume-4f4f3064-66d5-4638-b627-d117b9cef636": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00934816s
STEP: Saw pod success
Mar 12 23:44:19.835: INFO: Pod "downwardapi-volume-4f4f3064-66d5-4638-b627-d117b9cef636" satisfied condition "Succeeded or Failed"
Mar 12 23:44:19.838: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-4f4f3064-66d5-4638-b627-d117b9cef636 container client-container:
STEP: delete the pod
Mar 12 23:44:19.912: INFO: Waiting for pod downwardapi-volume-4f4f3064-66d5-4638-b627-d117b9cef636 to disappear
Mar 12 23:44:19.944: INFO: Pod downwardapi-volume-4f4f3064-66d5-4638-b627-d117b9cef636 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:44:19.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5403" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":23,"skipped":434,"failed":0}
S
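Note: the downward API volume here surfaces the container's CPU request as a file via resourceFieldRef inside a projected volume. A minimal sketch of that wiring; the names, mount path and the 250m request are illustrative. With the default divisor the value is exposed rounded up to whole cores.
kubectl create --namespace=projected-5403 -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
EOF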
------------------------------
[sig-storage] ConfigMap
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:44:19.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-2737c147-b2bc-4973-be31-1d8d8751343f
STEP: Creating a pod to test consume configMaps
Mar 12 23:44:20.022: INFO: Waiting up to 5m0s for pod "pod-configmaps-26710d1d-0b56-4772-b932-c5e870dce8ef" in namespace "configmap-2647" to be "Succeeded or Failed"
Mar 12 23:44:20.039: INFO: Pod "pod-configmaps-26710d1d-0b56-4772-b932-c5e870dce8ef": Phase="Pending", Reason="", readiness=false. Elapsed: 16.9418ms
Mar 12 23:44:22.043: INFO: Pod "pod-configmaps-26710d1d-0b56-4772-b932-c5e870dce8ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.020582045s
STEP: Saw pod success
Mar 12 23:44:22.043: INFO: Pod "pod-configmaps-26710d1d-0b56-4772-b932-c5e870dce8ef" satisfied condition "Succeeded or Failed"
Mar 12 23:44:22.046: INFO: Trying to get logs from node latest-worker pod pod-configmaps-26710d1d-0b56-4772-b932-c5e870dce8ef container configmap-volume-test:
STEP: delete the pod
Mar 12 23:44:22.089: INFO: Waiting for pod pod-configmaps-26710d1d-0b56-4772-b932-c5e870dce8ef to disappear
Mar 12 23:44:22.098: INFO: Pod pod-configmaps-26710d1d-0b56-4772-b932-c5e870dce8ef no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:44:22.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2647" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":24,"skipped":435,"failed":0}
SSSSSS
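Note: the consumption pattern here is the standard configMap volume, where each key in the ConfigMap becomes a file under the mount path. A minimal sketch with illustrative names:
kubectl create configmap cm-volume-demo --namespace=configmap-2647 --from-literal=data-1=value-1
kubectl create --namespace=configmap-2647 -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: cm-volume-demo
EOF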
------------------------------
[sig-cli] Kubectl client Kubectl logs
should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:44:22.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288
STEP: creating a pod
Mar 12 23:44:22.161: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-308 -- logs-generator --log-lines-total 100 --run-duration 20s'
Mar 12 23:44:22.259: INFO: stderr: ""
Mar 12 23:44:22.259: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Waiting for log generator to start.
Mar 12 23:44:22.259: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Mar 12 23:44:22.259: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-308" to be "running and ready, or succeeded"
Mar 12 23:44:22.301: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 41.708058ms
Mar 12 23:44:24.304: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 2.044653364s
Mar 12 23:44:24.304: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Mar 12 23:44:24.304: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Mar 12 23:44:24.304: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-308'
Mar 12 23:44:24.407: INFO: stderr: ""
Mar 12 23:44:24.407: INFO: stdout: "I0312 23:44:23.400167 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/vsg9 473\nI0312 23:44:23.600272 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/csj 393\nI0312 23:44:23.800336 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/2zzs 252\nI0312 23:44:24.000350 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/p6p 559\nI0312 23:44:24.200411 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/n56 415\nI0312 23:44:24.400329 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/rwt 542\n"
STEP: limiting log lines
Mar 12 23:44:24.407: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-308 --tail=1'
Mar 12 23:44:24.497: INFO: stderr: ""
Mar 12 23:44:24.497: INFO: stdout: "I0312 23:44:24.400329 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/rwt 542\n"
Mar 12 23:44:24.497: INFO: got output "I0312 23:44:24.400329 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/rwt 542\n"
STEP: limiting log bytes
Mar 12 23:44:24.497: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-308 --limit-bytes=1'
Mar 12 23:44:24.577: INFO: stderr: ""
Mar 12 23:44:24.577: INFO: stdout: "I"
Mar 12 23:44:24.577: INFO: got output "I"
STEP: exposing timestamps
Mar 12 23:44:24.577: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-308 --tail=1 --timestamps'
Mar 12 23:44:24.649: INFO: stderr: ""
Mar 12 23:44:24.649: INFO: stdout: "2020-03-12T23:44:24.600415513Z I0312 23:44:24.600308 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/rm5 505\n"
Mar 12 23:44:24.649: INFO: got output "2020-03-12T23:44:24.600415513Z I0312 23:44:24.600308 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/rm5 505\n"
STEP: restricting to a time range
Mar 12 23:44:27.150: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-308 --since=1s'
Mar 12 23:44:27.283: INFO: stderr: ""
Mar 12 23:44:27.283: INFO: stdout: "I0312 23:44:26.400323 1 logs_generator.go:76] 15 GET /api/v1/namespaces/kube-system/pods/gztv 557\nI0312 23:44:26.600367 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/5j5 331\nI0312 23:44:26.800362 1 logs_generator.go:76] 17 POST /api/v1/namespaces/default/pods/rtq 594\nI0312 23:44:27.000384 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/4cbt 248\nI0312 23:44:27.200415 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/slm 567\n"
Mar 12 23:44:27.284: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-308 --since=24h'
Mar 12 23:44:27.377: INFO: stderr: ""
Mar 12 23:44:27.377: INFO: stdout: "I0312 23:44:23.400167 1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/vsg9 473\nI0312 23:44:23.600272 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/csj 393\nI0312 23:44:23.800336 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/2zzs 252\nI0312 23:44:24.000350 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/p6p 559\nI0312 23:44:24.200411 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/n56 415\nI0312 23:44:24.400329 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/rwt 542\nI0312 23:44:24.600308 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/rm5 505\nI0312 23:44:24.800319 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/vbzm 208\nI0312 23:44:25.000304 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/wf84 390\nI0312 23:44:25.200329 1 logs_generator.go:76] 9 GET /api/v1/namespaces/default/pods/l5vw 540\nI0312 23:44:25.400353 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/cj7 477\nI0312 23:44:25.600353 1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/pjpg 228\nI0312 23:44:25.800338 1 logs_generator.go:76] 12 POST /api/v1/namespaces/kube-system/pods/rq4 368\nI0312 23:44:26.000372 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/rdz 528\nI0312 23:44:26.200361 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/ns/pods/jbcg 450\nI0312 23:44:26.400323 1 logs_generator.go:76] 15 GET /api/v1/namespaces/kube-system/pods/gztv 557\nI0312 23:44:26.600367 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/5j5 331\nI0312 23:44:26.800362 1 logs_generator.go:76] 17 POST /api/v1/namespaces/default/pods/rtq 594\nI0312 23:44:27.000384 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/4cbt 248\nI0312 23:44:27.200415 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/slm 567\n"
[AfterEach] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294
Mar 12 23:44:27.377: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-308'
Mar 12 23:44:28.962: INFO: stderr: ""
Mar 12 23:44:28.962: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:44:28.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-308" for this suite.
• [SLOW TEST:6.844 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284
should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":275,"completed":25,"skipped":441,"failed":0}
SSSSSSSSSSSSSS
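Note: stripped of the harness, the filtering above is exactly the standard kubectl logs flags that appear in the commands logged in this block (the test passes the container name as a second argument; with a single-container pod it can be omitted):
kubectl logs logs-generator --namespace=kubectl-308                      # full log
kubectl logs logs-generator --namespace=kubectl-308 --tail=1             # last line only
kubectl logs logs-generator --namespace=kubectl-308 --limit-bytes=1      # first byte only
kubectl logs logs-generator --namespace=kubectl-308 --tail=1 --timestamps
kubectl logs logs-generator --namespace=kubectl-308 --since=1s
kubectl logs logs-generator --namespace=kubectl-308 --since=24h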
------------------------------
[sig-api-machinery] Aggregator
Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:44:28.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Mar 12 23:44:29.011: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the sample API server.
Mar 12 23:44:29.795: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Mar 12 23:44:31.872: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719653469, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719653469, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719653469, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719653469, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 12 23:44:34.482: INFO: Waited 605.317964ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:44:34.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-2747" for this suite.
• [SLOW TEST:6.064 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":26,"skipped":455,"failed":0}
SSSSSSSSSSSSSSSSSS
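Note: registering an aggregated API like the sample server above comes down to an APIService object that points the aggregation layer at a Service fronting the extension apiserver. A rough sketch; the group, version, service name and priorities are illustrative rather than the exact fixture this test deploys.
kubectl apply -f - <<'EOF'
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com
spec:
  group: wardle.example.com
  version: v1alpha1
  groupPriorityMinimum: 2000
  versionPriority: 15
  insecureSkipTLSVerify: true        # a real deployment would supply caBundle instead
  service:
    name: sample-api
    namespace: aggregator-2747
    port: 443
EOF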
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:44:35.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-9792
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating stateful set ss in namespace statefulset-9792
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9792
Mar 12 23:44:35.086: INFO: Found 0 stateful pods, waiting for 1
Mar 12 23:44:45.106: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Mar 12 23:44:45.110: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9792 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 12 23:44:45.357: INFO: stderr: "I0312 23:44:45.257790 521 log.go:172] (0xc00003a6e0) (0xc0006615e0) Create stream\nI0312 23:44:45.257857 521 log.go:172] (0xc00003a6e0) (0xc0006615e0) Stream added, broadcasting: 1\nI0312 23:44:45.260615 521 log.go:172] (0xc00003a6e0) Reply frame received for 1\nI0312 23:44:45.260642 521 log.go:172] (0xc00003a6e0) (0xc0005b5680) Create stream\nI0312 23:44:45.260649 521 log.go:172] (0xc00003a6e0) (0xc0005b5680) Stream added, broadcasting: 3\nI0312 23:44:45.261514 521 log.go:172] (0xc00003a6e0) Reply frame received for 3\nI0312 23:44:45.261552 521 log.go:172] (0xc00003a6e0) (0xc000661680) Create stream\nI0312 23:44:45.261564 521 log.go:172] (0xc00003a6e0) (0xc000661680) Stream added, broadcasting: 5\nI0312 23:44:45.262536 521 log.go:172] (0xc00003a6e0) Reply frame received for 5\nI0312 23:44:45.327177 521 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0312 23:44:45.327196 521 log.go:172] (0xc000661680) (5) Data frame handling\nI0312 23:44:45.327207 521 log.go:172] (0xc000661680) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0312 23:44:45.351939 521 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0312 23:44:45.351976 521 log.go:172] (0xc000661680) (5) Data frame handling\nI0312 23:44:45.352002 521 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0312 23:44:45.352047 521 log.go:172] (0xc0005b5680) (3) Data frame handling\nI0312 23:44:45.352065 521 log.go:172] (0xc0005b5680) (3) Data frame sent\nI0312 23:44:45.352072 521 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0312 23:44:45.352078 521 log.go:172] (0xc0005b5680) (3) Data frame handling\nI0312 23:44:45.353741 521 log.go:172] (0xc00003a6e0) Data frame received for 1\nI0312 23:44:45.353765 521 log.go:172] (0xc0006615e0) (1) Data frame handling\nI0312 23:44:45.353788 521 log.go:172] (0xc0006615e0) (1) Data frame sent\nI0312 23:44:45.353806 521 log.go:172] (0xc00003a6e0) (0xc0006615e0) Stream removed, broadcasting: 1\nI0312 23:44:45.353830 521 log.go:172] (0xc00003a6e0) Go away received\nI0312 23:44:45.354266 521 log.go:172] (0xc00003a6e0) (0xc0006615e0) Stream removed, broadcasting: 1\nI0312 23:44:45.354286 521 log.go:172] (0xc00003a6e0) (0xc0005b5680) Stream removed, broadcasting: 3\nI0312 23:44:45.354296 521 log.go:172] (0xc00003a6e0) (0xc000661680) Stream removed, broadcasting: 5\n"
Mar 12 23:44:45.357: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 12 23:44:45.357: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Mar 12 23:44:45.360: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Mar 12 23:44:55.364: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Mar 12 23:44:55.364: INFO: Waiting for statefulset status.replicas updated to 0
Mar 12 23:44:55.384: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 12 23:44:55.384: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:35 +0000 UTC }]
Mar 12 23:44:55.384: INFO:
Mar 12 23:44:55.384: INFO: StatefulSet ss has not reached scale 3, at 1
Mar 12 23:44:56.388: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.987669235s
Mar 12 23:44:57.392: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.984261796s
Mar 12 23:44:58.395: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.98024802s
Mar 12 23:44:59.399: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.976693445s
Mar 12 23:45:00.403: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.972749706s
Mar 12 23:45:01.407: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.969188675s
Mar 12 23:45:02.418: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.964972997s
Mar 12 23:45:03.422: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.954289541s
Mar 12 23:45:04.426: INFO: Verifying statefulset ss doesn't scale past 3 for another 950.292309ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9792
Mar 12 23:45:05.431: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9792 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 12 23:45:05.629: INFO: stderr: "I0312 23:45:05.563865 543 log.go:172] (0xc0003c7d90) (0xc00069d680) Create stream\nI0312 23:45:05.563911 543 log.go:172] (0xc0003c7d90) (0xc00069d680) Stream added, broadcasting: 1\nI0312 23:45:05.566172 543 log.go:172] (0xc0003c7d90) Reply frame received for 1\nI0312 23:45:05.566205 543 log.go:172] (0xc0003c7d90) (0xc0003d0b40) Create stream\nI0312 23:45:05.566214 543 log.go:172] (0xc0003c7d90) (0xc0003d0b40) Stream added, broadcasting: 3\nI0312 23:45:05.567045 543 log.go:172] (0xc0003c7d90) Reply frame received for 3\nI0312 23:45:05.567078 543 log.go:172] (0xc0003c7d90) (0xc00069d720) Create stream\nI0312 23:45:05.567088 543 log.go:172] (0xc0003c7d90) (0xc00069d720) Stream added, broadcasting: 5\nI0312 23:45:05.567976 543 log.go:172] (0xc0003c7d90) Reply frame received for 5\nI0312 23:45:05.624843 543 log.go:172] (0xc0003c7d90) Data frame received for 3\nI0312 23:45:05.624885 543 log.go:172] (0xc0003d0b40) (3) Data frame handling\nI0312 23:45:05.624899 543 log.go:172] (0xc0003d0b40) (3) Data frame sent\nI0312 23:45:05.624909 543 log.go:172] (0xc0003c7d90) Data frame received for 3\nI0312 23:45:05.624918 543 log.go:172] (0xc0003d0b40) (3) Data frame handling\nI0312 23:45:05.624949 543 log.go:172] (0xc0003c7d90) Data frame received for 5\nI0312 23:45:05.624957 543 log.go:172] (0xc00069d720) (5) Data frame handling\nI0312 23:45:05.624976 543 log.go:172] (0xc00069d720) (5) Data frame sent\nI0312 23:45:05.624986 543 log.go:172] (0xc0003c7d90) Data frame received for 5\nI0312 23:45:05.625003 543 log.go:172] (0xc00069d720) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0312 23:45:05.626050 543 log.go:172] (0xc0003c7d90) Data frame received for 1\nI0312 23:45:05.626068 543 log.go:172] (0xc00069d680) (1) Data frame handling\nI0312 23:45:05.626077 543 log.go:172] (0xc00069d680) (1) Data frame sent\nI0312 23:45:05.626088 543 log.go:172] (0xc0003c7d90) (0xc00069d680) Stream removed, broadcasting: 1\nI0312 23:45:05.626384 543 log.go:172] (0xc0003c7d90) (0xc00069d680) Stream removed, broadcasting: 1\nI0312 23:45:05.626400 543 log.go:172] (0xc0003c7d90) (0xc0003d0b40) Stream removed, broadcasting: 3\nI0312 23:45:05.626406 543 log.go:172] (0xc0003c7d90) (0xc00069d720) Stream removed, broadcasting: 5\n"
Mar 12 23:45:05.629: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 12 23:45:05.629: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Mar 12 23:45:05.629: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9792 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 12 23:45:05.821: INFO: stderr: "I0312 23:45:05.748542 564 log.go:172] (0xc0008366e0) (0xc0007ee000) Create stream\nI0312 23:45:05.748582 564 log.go:172] (0xc0008366e0) (0xc0007ee000) Stream added, broadcasting: 1\nI0312 23:45:05.750786 564 log.go:172] (0xc0008366e0) Reply frame received for 1\nI0312 23:45:05.750812 564 log.go:172] (0xc0008366e0) (0xc0006e1360) Create stream\nI0312 23:45:05.750821 564 log.go:172] (0xc0008366e0) (0xc0006e1360) Stream added, broadcasting: 3\nI0312 23:45:05.751646 564 log.go:172] (0xc0008366e0) Reply frame received for 3\nI0312 23:45:05.751689 564 log.go:172] (0xc0008366e0) (0xc0006e1540) Create stream\nI0312 23:45:05.751696 564 log.go:172] (0xc0008366e0) (0xc0006e1540) Stream added, broadcasting: 5\nI0312 23:45:05.752416 564 log.go:172] (0xc0008366e0) Reply frame received for 5\nI0312 23:45:05.816285 564 log.go:172] (0xc0008366e0) Data frame received for 5\nI0312 23:45:05.816318 564 log.go:172] (0xc0006e1540) (5) Data frame handling\nI0312 23:45:05.816331 564 log.go:172] (0xc0006e1540) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0312 23:45:05.816348 564 log.go:172] (0xc0008366e0) Data frame received for 3\nI0312 23:45:05.816356 564 log.go:172] (0xc0006e1360) (3) Data frame handling\nI0312 23:45:05.816364 564 log.go:172] (0xc0006e1360) (3) Data frame sent\nI0312 23:45:05.816381 564 log.go:172] (0xc0008366e0) Data frame received for 3\nI0312 23:45:05.816388 564 log.go:172] (0xc0006e1360) (3) Data frame handling\nI0312 23:45:05.816449 564 log.go:172] (0xc0008366e0) Data frame received for 5\nI0312 23:45:05.816462 564 log.go:172] (0xc0006e1540) (5) Data frame handling\nI0312 23:45:05.817698 564 log.go:172] (0xc0008366e0) Data frame received for 1\nI0312 23:45:05.817718 564 log.go:172] (0xc0007ee000) (1) Data frame handling\nI0312 23:45:05.817727 564 log.go:172] (0xc0007ee000) (1) Data frame sent\nI0312 23:45:05.817736 564 log.go:172] (0xc0008366e0) (0xc0007ee000) Stream removed, broadcasting: 1\nI0312 23:45:05.817888 564 log.go:172] (0xc0008366e0) Go away received\nI0312 23:45:05.818006 564 log.go:172] (0xc0008366e0) (0xc0007ee000) Stream removed, broadcasting: 1\nI0312 23:45:05.818022 564 log.go:172] (0xc0008366e0) (0xc0006e1360) Stream removed, broadcasting: 3\nI0312 23:45:05.818028 564 log.go:172] (0xc0008366e0) (0xc0006e1540) Stream removed, broadcasting: 5\n"
Mar 12 23:45:05.821: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 12 23:45:05.821: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Mar 12 23:45:05.821: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9792 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Mar 12 23:45:05.971: INFO: stderr: "I0312 23:45:05.919982 584 log.go:172] (0xc000ba9290) (0xc0008f4a00) Create stream\nI0312 23:45:05.920014 584 log.go:172] (0xc000ba9290) (0xc0008f4a00) Stream added, broadcasting: 1\nI0312 23:45:05.922742 584 log.go:172] (0xc000ba9290) Reply frame received for 1\nI0312 23:45:05.922767 584 log.go:172] (0xc000ba9290) (0xc0007dd720) Create stream\nI0312 23:45:05.922773 584 log.go:172] (0xc000ba9290) (0xc0007dd720) Stream added, broadcasting: 3\nI0312 23:45:05.923345 584 log.go:172] (0xc000ba9290) Reply frame received for 3\nI0312 23:45:05.923365 584 log.go:172] (0xc000ba9290) (0xc000546b40) Create stream\nI0312 23:45:05.923371 584 log.go:172] (0xc000ba9290) (0xc000546b40) Stream added, broadcasting: 5\nI0312 23:45:05.923930 584 log.go:172] (0xc000ba9290) Reply frame received for 5\nI0312 23:45:05.967221 584 log.go:172] (0xc000ba9290) Data frame received for 3\nI0312 23:45:05.967256 584 log.go:172] (0xc0007dd720) (3) Data frame handling\nI0312 23:45:05.967265 584 log.go:172] (0xc0007dd720) (3) Data frame sent\nI0312 23:45:05.967271 584 log.go:172] (0xc000ba9290) Data frame received for 3\nI0312 23:45:05.967278 584 log.go:172] (0xc0007dd720) (3) Data frame handling\nI0312 23:45:05.967287 584 log.go:172] (0xc000ba9290) Data frame received for 5\nI0312 23:45:05.967293 584 log.go:172] (0xc000546b40) (5) Data frame handling\nI0312 23:45:05.967300 584 log.go:172] (0xc000546b40) (5) Data frame sent\nI0312 23:45:05.967306 584 log.go:172] (0xc000ba9290) Data frame received for 5\nI0312 23:45:05.967312 584 log.go:172] (0xc000546b40) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0312 23:45:05.968565 584 log.go:172] (0xc000ba9290) Data frame received for 1\nI0312 23:45:05.968595 584 log.go:172] (0xc0008f4a00) (1) Data frame handling\nI0312 23:45:05.968605 584 log.go:172] (0xc0008f4a00) (1) Data frame sent\nI0312 23:45:05.968618 584 log.go:172] (0xc000ba9290) (0xc0008f4a00) Stream removed, broadcasting: 1\nI0312 23:45:05.968656 584 log.go:172] (0xc000ba9290) Go away received\nI0312 23:45:05.968851 584 log.go:172] (0xc000ba9290) (0xc0008f4a00) Stream removed, broadcasting: 1\nI0312 23:45:05.968863 584 log.go:172] (0xc000ba9290) (0xc0007dd720) Stream removed, broadcasting: 3\nI0312 23:45:05.968870 584 log.go:172] (0xc000ba9290) (0xc000546b40) Stream removed, broadcasting: 5\n"
Mar 12 23:45:05.971: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Mar 12 23:45:05.971: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Mar 12 23:45:05.975: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 12 23:45:05.975: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 12 23:45:05.975: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Mar 12 23:45:05.977: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9792 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 12 23:45:06.144: INFO: stderr: "I0312 23:45:06.065303 606 log.go:172] (0xc0009f08f0) (0xc000a241e0) Create stream\nI0312 23:45:06.065336 606 log.go:172] (0xc0009f08f0) (0xc000a241e0) Stream added, broadcasting: 1\nI0312 23:45:06.067076 606 log.go:172] (0xc0009f08f0) Reply frame received for 1\nI0312 23:45:06.067105 606 log.go:172] (0xc0009f08f0) (0xc00080f220) Create stream\nI0312 23:45:06.067120 606 log.go:172] (0xc0009f08f0) (0xc00080f220) Stream added, broadcasting: 3\nI0312 23:45:06.067591 606 log.go:172] (0xc0009f08f0) Reply frame received for 3\nI0312 23:45:06.067607 606 log.go:172] (0xc0009f08f0) (0xc00080f400) Create stream\nI0312 23:45:06.067612 606 log.go:172] (0xc0009f08f0) (0xc00080f400) Stream added, broadcasting: 5\nI0312 23:45:06.068073 606 log.go:172] (0xc0009f08f0) Reply frame received for 5\nI0312 23:45:06.140140 606 log.go:172] (0xc0009f08f0) Data frame received for 3\nI0312 23:45:06.140160 606 log.go:172] (0xc00080f220) (3) Data frame handling\nI0312 23:45:06.140166 606 log.go:172] (0xc00080f220) (3) Data frame sent\nI0312 23:45:06.140171 606 log.go:172] (0xc0009f08f0) Data frame received for 3\nI0312 23:45:06.140174 606 log.go:172] (0xc00080f220) (3) Data frame handling\nI0312 23:45:06.140192 606 log.go:172] (0xc0009f08f0) Data frame received for 5\nI0312 23:45:06.140199 606 log.go:172] (0xc00080f400) (5) Data frame handling\nI0312 23:45:06.140206 606 log.go:172] (0xc00080f400) (5) Data frame sent\nI0312 23:45:06.140212 606 log.go:172] (0xc0009f08f0) Data frame received for 5\nI0312 23:45:06.140215 606 log.go:172] (0xc00080f400) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0312 23:45:06.140857 606 log.go:172] (0xc0009f08f0) Data frame received for 1\nI0312 23:45:06.140875 606 log.go:172] (0xc000a241e0) (1) Data frame handling\nI0312 23:45:06.140883 606 log.go:172] (0xc000a241e0) (1) Data frame sent\nI0312 23:45:06.140897 606 log.go:172] (0xc0009f08f0) (0xc000a241e0) Stream removed, broadcasting: 1\nI0312 23:45:06.140908 606 log.go:172] (0xc0009f08f0) Go away received\nI0312 23:45:06.141161 606 log.go:172] (0xc0009f08f0) (0xc000a241e0) Stream removed, broadcasting: 1\nI0312 23:45:06.141174 606 log.go:172] (0xc0009f08f0) (0xc00080f220) Stream removed, broadcasting: 3\nI0312 23:45:06.141179 606 log.go:172] (0xc0009f08f0) (0xc00080f400) Stream removed, broadcasting: 5\n"
Mar 12 23:45:06.144: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 12 23:45:06.144: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Mar 12 23:45:06.144: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9792 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 12 23:45:06.330: INFO: stderr: "I0312 23:45:06.234296 626 log.go:172] (0xc0009d62c0) (0xc000a5e640) Create stream\nI0312 23:45:06.234329 626 log.go:172] (0xc0009d62c0) (0xc000a5e640) Stream added, broadcasting: 1\nI0312 23:45:06.236964 626 log.go:172] (0xc0009d62c0) Reply frame received for 1\nI0312 23:45:06.236986 626 log.go:172] (0xc0009d62c0) (0xc000209540) Create stream\nI0312 23:45:06.236993 626 log.go:172] (0xc0009d62c0) (0xc000209540) Stream added, broadcasting: 3\nI0312 23:45:06.237584 626 log.go:172] (0xc0009d62c0) Reply frame received for 3\nI0312 23:45:06.237601 626 log.go:172] (0xc0009d62c0) (0xc0006843c0) Create stream\nI0312 23:45:06.237608 626 log.go:172] (0xc0009d62c0) (0xc0006843c0) Stream added, broadcasting: 5\nI0312 23:45:06.238151 626 log.go:172] (0xc0009d62c0) Reply frame received for 5\nI0312 23:45:06.306304 626 log.go:172] (0xc0009d62c0) Data frame received for 5\nI0312 23:45:06.306323 626 log.go:172] (0xc0006843c0) (5) Data frame handling\nI0312 23:45:06.306337 626 log.go:172] (0xc0006843c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0312 23:45:06.326735 626 log.go:172] (0xc0009d62c0) Data frame received for 3\nI0312 23:45:06.326753 626 log.go:172] (0xc000209540) (3) Data frame handling\nI0312 23:45:06.326766 626 log.go:172] (0xc000209540) (3) Data frame sent\nI0312 23:45:06.326772 626 log.go:172] (0xc0009d62c0) Data frame received for 3\nI0312 23:45:06.326785 626 log.go:172] (0xc000209540) (3) Data frame handling\nI0312 23:45:06.326840 626 log.go:172] (0xc0009d62c0) Data frame received for 5\nI0312 23:45:06.326852 626 log.go:172] (0xc0006843c0) (5) Data frame handling\nI0312 23:45:06.327697 626 log.go:172] (0xc0009d62c0) Data frame received for 1\nI0312 23:45:06.327713 626 log.go:172] (0xc000a5e640) (1) Data frame handling\nI0312 23:45:06.327723 626 log.go:172] (0xc000a5e640) (1) Data frame sent\nI0312 23:45:06.327756 626 log.go:172] (0xc0009d62c0) (0xc000a5e640) Stream removed, broadcasting: 1\nI0312 23:45:06.327788 626 log.go:172] (0xc0009d62c0) Go away received\nI0312 23:45:06.327975 626 log.go:172] (0xc0009d62c0) (0xc000a5e640) Stream removed, broadcasting: 1\nI0312 23:45:06.327986 626 log.go:172] (0xc0009d62c0) (0xc000209540) Stream removed, broadcasting: 3\nI0312 23:45:06.327993 626 log.go:172] (0xc0009d62c0) (0xc0006843c0) Stream removed, broadcasting: 5\n"
Mar 12 23:45:06.330: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 12 23:45:06.330: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Mar 12 23:45:06.330: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9792 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Mar 12 23:45:06.529: INFO: stderr: "I0312 23:45:06.415303 646 log.go:172] (0xc0006022c0) (0xc0006f1540) Create stream\nI0312 23:45:06.415337 646 log.go:172] (0xc0006022c0) (0xc0006f1540) Stream added, broadcasting: 1\nI0312 23:45:06.417171 646 log.go:172] (0xc0006022c0) Reply frame received for 1\nI0312 23:45:06.417189 646 log.go:172] (0xc0006022c0) (0xc0004ccb40) Create stream\nI0312 23:45:06.417195 646 log.go:172] (0xc0006022c0) (0xc0004ccb40) Stream added, broadcasting: 3\nI0312 23:45:06.418287 646 log.go:172] (0xc0006022c0) Reply frame received for 3\nI0312 23:45:06.418311 646 log.go:172] (0xc0006022c0) (0xc0008e6000) Create stream\nI0312 23:45:06.418321 646 log.go:172] (0xc0006022c0) (0xc0008e6000) Stream added, broadcasting: 5\nI0312 23:45:06.418813 646 log.go:172] (0xc0006022c0) Reply frame received for 5\nI0312 23:45:06.480410 646 log.go:172] (0xc0006022c0) Data frame received for 5\nI0312 23:45:06.480431 646 log.go:172] (0xc0008e6000) (5) Data frame handling\nI0312 23:45:06.480438 646 log.go:172] (0xc0008e6000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0312 23:45:06.524550 646 log.go:172] (0xc0006022c0) Data frame received for 3\nI0312 23:45:06.524578 646 log.go:172] (0xc0004ccb40) (3) Data frame handling\nI0312 23:45:06.524588 646 log.go:172] (0xc0004ccb40) (3) Data frame sent\nI0312 23:45:06.524816 646 log.go:172] (0xc0006022c0) Data frame received for 5\nI0312 23:45:06.524837 646 log.go:172] (0xc0008e6000) (5) Data frame handling\nI0312 23:45:06.524851 646 log.go:172] (0xc0006022c0) Data frame received for 3\nI0312 23:45:06.524859 646 log.go:172] (0xc0004ccb40) (3) Data frame handling\nI0312 23:45:06.525965 646 log.go:172] (0xc0006022c0) Data frame received for 1\nI0312 23:45:06.525979 646 log.go:172] (0xc0006f1540) (1) Data frame handling\nI0312 23:45:06.525988 646 log.go:172] (0xc0006f1540) (1) Data frame sent\nI0312 23:45:06.526000 646 log.go:172] (0xc0006022c0) (0xc0006f1540) Stream removed, broadcasting: 1\nI0312 23:45:06.526013 646 log.go:172] (0xc0006022c0) Go away received\nI0312 23:45:06.526309 646 log.go:172] (0xc0006022c0) (0xc0006f1540) Stream removed, broadcasting: 1\nI0312 23:45:06.526323 646 log.go:172] (0xc0006022c0) (0xc0004ccb40) Stream removed, broadcasting: 3\nI0312 23:45:06.526330 646 log.go:172] (0xc0006022c0) (0xc0008e6000) Stream removed, broadcasting: 5\n"
Mar 12 23:45:06.529: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Mar 12 23:45:06.529: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Mar 12 23:45:06.529: INFO: Waiting for statefulset status.replicas updated to 0
Mar 12 23:45:06.532: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Mar 12 23:45:16.546: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Mar 12 23:45:16.546: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Mar 12 23:45:16.546: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Mar 12 23:45:16.556: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 12 23:45:16.556: INFO: ss-0 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:35 +0000 UTC }]
Mar 12 23:45:16.556: INFO: ss-1 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:55 +0000 UTC }]
Mar 12 23:45:16.556: INFO: ss-2 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:55 +0000 UTC }]
Mar 12 23:45:16.556: INFO:
Mar 12 23:45:16.556: INFO: StatefulSet ss has not reached scale 0, at 3
Mar 12 23:45:17.559: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 12 23:45:17.559: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:35 +0000 UTC }]
Mar 12 23:45:17.560: INFO: ss-1 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:55 +0000 UTC }]
Mar 12 23:45:17.560: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:55 +0000 UTC }]
Mar 12 23:45:17.560: INFO:
Mar 12 23:45:17.560: INFO: StatefulSet ss has not reached scale 0, at 3
Mar 12 23:45:18.563: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 12 23:45:18.563: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:35 +0000 UTC }]
Mar 12 23:45:18.563: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:55 +0000 UTC }]
Mar 12 23:45:18.563: INFO:
Mar 12 23:45:18.563: INFO: StatefulSet ss has not reached scale 0, at 2
Mar 12 23:45:19.567: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 12 23:45:19.567: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:35 +0000 UTC }]
Mar 12 23:45:19.567: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:55 +0000 UTC }]
Mar 12 23:45:19.567: INFO:
Mar 12 23:45:19.567: INFO: StatefulSet ss has not reached scale 0, at 2
Mar 12 23:45:20.571: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 12 23:45:20.571: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:35 +0000 UTC }]
Mar 12 23:45:20.571: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:55 +0000 UTC }]
Mar 12 23:45:20.571: INFO:
Mar 12 23:45:20.571: INFO: StatefulSet ss has not reached scale 0, at 2
Mar 12 23:45:21.575: INFO: POD NODE PHASE GRACE CONDITIONS
Mar 12 23:45:21.575: INFO: ss-0 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:35 +0000 UTC }]
Mar 12 23:45:21.575: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:45:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-12 23:44:55 +0000 UTC }]
Mar 12 23:45:21.575: INFO:
Mar 12 23:45:21.575: INFO: StatefulSet ss has not reached scale 0, at 2
Mar 12 23:45:22.585: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.974627415s
Mar 12 23:45:23.589: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.964645872s
Mar 12 23:45:24.592: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.961183358s
Mar 12 23:45:25.597: INFO: Verifying statefulset ss doesn't scale past 0 for another 957.844643ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-9792
Mar 12 23:45:26.601: INFO: Scaling statefulset ss to 0
Mar 12 23:45:26.609: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Mar 12 23:45:26.612: INFO: Deleting all statefulset in ns statefulset-9792
Mar 12 23:45:26.614: INFO: Scaling statefulset ss to 0
Mar 12 23:45:26.623: INFO: Waiting for statefulset status.replicas updated to 0
Mar 12 23:45:26.625: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:45:26.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9792" for this suite.
• [SLOW TEST:51.628 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":27,"skipped":473,"failed":0}
SS
------------------------------
[k8s.io] Docker Containers
should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:45:26.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override command
Mar 12 23:45:26.714: INFO: Waiting up to 5m0s for pod "client-containers-a217e58b-7fa9-435b-a8e1-fccc76564ada" in namespace "containers-5468" to be "Succeeded or Failed"
Mar 12 23:45:26.731: INFO: Pod "client-containers-a217e58b-7fa9-435b-a8e1-fccc76564ada": Phase="Pending", Reason="", readiness=false. Elapsed: 17.150895ms
Mar 12 23:45:28.734: INFO: Pod "client-containers-a217e58b-7fa9-435b-a8e1-fccc76564ada": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.020624799s
STEP: Saw pod success
Mar 12 23:45:28.735: INFO: Pod "client-containers-a217e58b-7fa9-435b-a8e1-fccc76564ada" satisfied condition "Succeeded or Failed"
Mar 12 23:45:28.737: INFO: Trying to get logs from node latest-worker pod client-containers-a217e58b-7fa9-435b-a8e1-fccc76564ada container test-container:
STEP: delete the pod
Mar 12 23:45:28.785: INFO: Waiting for pod client-containers-a217e58b-7fa9-435b-a8e1-fccc76564ada to disappear
Mar 12 23:45:28.788: INFO: Pod client-containers-a217e58b-7fa9-435b-a8e1-fccc76564ada no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:45:28.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5468" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":28,"skipped":475,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:45:28.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-downwardapi-nmxm
STEP: Creating a pod to test atomic-volume-subpath
Mar 12 23:45:28.868: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-nmxm" in namespace "subpath-311" to be "Succeeded or Failed"
Mar 12 23:45:28.872: INFO: Pod "pod-subpath-test-downwardapi-nmxm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150406ms
Mar 12 23:45:30.876: INFO: Pod "pod-subpath-test-downwardapi-nmxm": Phase="Running", Reason="", readiness=true. Elapsed: 2.007921197s
Mar 12 23:45:32.880: INFO: Pod "pod-subpath-test-downwardapi-nmxm": Phase="Running", Reason="", readiness=true. Elapsed: 4.012124605s
Mar 12 23:45:34.884: INFO: Pod "pod-subpath-test-downwardapi-nmxm": Phase="Running", Reason="", readiness=true. Elapsed: 6.015578449s
Mar 12 23:45:36.888: INFO: Pod "pod-subpath-test-downwardapi-nmxm": Phase="Running", Reason="", readiness=true. Elapsed: 8.019354308s
Mar 12 23:45:38.892: INFO: Pod "pod-subpath-test-downwardapi-nmxm": Phase="Running", Reason="", readiness=true. Elapsed: 10.023209612s
Mar 12 23:45:40.896: INFO: Pod "pod-subpath-test-downwardapi-nmxm": Phase="Running", Reason="", readiness=true. Elapsed: 12.027542604s
Mar 12 23:45:42.899: INFO: Pod "pod-subpath-test-downwardapi-nmxm": Phase="Running", Reason="", readiness=true. Elapsed: 14.030804654s
Mar 12 23:45:44.907: INFO: Pod "pod-subpath-test-downwardapi-nmxm": Phase="Running", Reason="", readiness=true. Elapsed: 16.038412205s
Mar 12 23:45:46.911: INFO: Pod "pod-subpath-test-downwardapi-nmxm": Phase="Running", Reason="", readiness=true. Elapsed: 18.042474642s
Mar 12 23:45:48.915: INFO: Pod "pod-subpath-test-downwardapi-nmxm": Phase="Running", Reason="", readiness=true. Elapsed: 20.046266085s
Mar 12 23:45:50.918: INFO: Pod "pod-subpath-test-downwardapi-nmxm": Phase="Running", Reason="", readiness=true. Elapsed: 22.049779359s
Mar 12 23:45:52.922: INFO: Pod "pod-subpath-test-downwardapi-nmxm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.053728026s
STEP: Saw pod success
Mar 12 23:45:52.922: INFO: Pod "pod-subpath-test-downwardapi-nmxm" satisfied condition "Succeeded or Failed"
Mar 12 23:45:52.926: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-downwardapi-nmxm container test-container-subpath-downwardapi-nmxm:
STEP: delete the pod
Mar 12 23:45:52.959: INFO: Waiting for pod pod-subpath-test-downwardapi-nmxm to disappear
Mar 12 23:45:52.983: INFO: Pod pod-subpath-test-downwardapi-nmxm no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-nmxm
Mar 12 23:45:52.983: INFO: Deleting pod "pod-subpath-test-downwardapi-nmxm" in namespace "subpath-311"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:45:52.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-311" for this suite.
• [SLOW TEST:24.192 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":29,"skipped":483,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:45:52.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 12 23:45:53.749: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 12 23:45:55.759: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719653553, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719653553, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719653553, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719653553, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 12 23:45:58.771: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:45:58.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6585" for this suite.
STEP: Destroying namespace "webhook-6585-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.047 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":30,"skipped":494,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
works for CRD without validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:45:59.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 12 23:45:59.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Mar 12 23:46:00.872: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5637 create -f -'
Mar 12 23:46:02.865: INFO: stderr: ""
Mar 12 23:46:02.865: INFO: stdout: "e2e-test-crd-publish-openapi-6552-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Mar 12 23:46:02.865: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5637 delete e2e-test-crd-publish-openapi-6552-crds test-cr'
Mar 12 23:46:02.971: INFO: stderr: ""
Mar 12 23:46:02.971: INFO: stdout: "e2e-test-crd-publish-openapi-6552-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Mar 12 23:46:02.971: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5637 apply -f -'
Mar 12 23:46:03.189: INFO: stderr: ""
Mar 12 23:46:03.189: INFO: stdout: "e2e-test-crd-publish-openapi-6552-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Mar 12 23:46:03.189: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5637 delete e2e-test-crd-publish-openapi-6552-crds test-cr'
Mar 12 23:46:03.281: INFO: stderr: ""
Mar 12 23:46:03.281: INFO: stdout: "e2e-test-crd-publish-openapi-6552-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Mar 12 23:46:03.282: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6552-crds'
Mar 12 23:46:03.510: INFO: stderr: ""
Mar 12 23:46:03.510: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6552-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:46:06.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5637" for this suite.
• [SLOW TEST:7.264 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for CRD without validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":31,"skipped":506,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases
should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:46:06.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:46:08.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4197" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":32,"skipped":538,"failed":0}
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:46:08.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Mar 12 23:46:08.461: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:46:12.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9699" for this suite.
•{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":33,"skipped":541,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Secrets
should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:46:12.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating secret secrets-7095/secret-test-ad93981a-cc8b-49c4-96e4-ef69bf8d2594
STEP: Creating a pod to test consume secrets
Mar 12 23:46:12.248: INFO: Waiting up to 5m0s for pod "pod-configmaps-ea3086aa-b2bc-498f-a29c-0e77a2946a06" in namespace "secrets-7095" to be "Succeeded or Failed"
Mar 12 23:46:12.253: INFO: Pod "pod-configmaps-ea3086aa-b2bc-498f-a29c-0e77a2946a06": Phase="Pending", Reason="", readiness=false. Elapsed: 4.417361ms
Mar 12 23:46:14.256: INFO: Pod "pod-configmaps-ea3086aa-b2bc-498f-a29c-0e77a2946a06": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007639792s
STEP: Saw pod success
Mar 12 23:46:14.256: INFO: Pod "pod-configmaps-ea3086aa-b2bc-498f-a29c-0e77a2946a06" satisfied condition "Succeeded or Failed"
Mar 12 23:46:14.258: INFO: Trying to get logs from node latest-worker pod pod-configmaps-ea3086aa-b2bc-498f-a29c-0e77a2946a06 container env-test:
STEP: delete the pod
Mar 12 23:46:14.278: INFO: Waiting for pod pod-configmaps-ea3086aa-b2bc-498f-a29c-0e77a2946a06 to disappear
Mar 12 23:46:14.281: INFO: Pod pod-configmaps-ea3086aa-b2bc-498f-a29c-0e77a2946a06 no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:46:14.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7095" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":34,"skipped":547,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:46:14.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-a403256f-ce82-4022-b889-8dbda7444541
STEP: Creating a pod to test consume secrets
Mar 12 23:46:14.342: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e931fab1-f478-4eb8-8f0f-9b92c1ae3ca4" in namespace "projected-8064" to be "Succeeded or Failed"
Mar 12 23:46:14.370: INFO: Pod "pod-projected-secrets-e931fab1-f478-4eb8-8f0f-9b92c1ae3ca4": Phase="Pending", Reason="", readiness=false. Elapsed: 28.614687ms
Mar 12 23:46:16.373: INFO: Pod "pod-projected-secrets-e931fab1-f478-4eb8-8f0f-9b92c1ae3ca4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.031411378s
STEP: Saw pod success
Mar 12 23:46:16.373: INFO: Pod "pod-projected-secrets-e931fab1-f478-4eb8-8f0f-9b92c1ae3ca4" satisfied condition "Succeeded or Failed"
Mar 12 23:46:16.375: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-e931fab1-f478-4eb8-8f0f-9b92c1ae3ca4 container projected-secret-volume-test:
STEP: delete the pod
Mar 12 23:46:16.395: INFO: Waiting for pod pod-projected-secrets-e931fab1-f478-4eb8-8f0f-9b92c1ae3ca4 to disappear
Mar 12 23:46:16.400: INFO: Pod pod-projected-secrets-e931fab1-f478-4eb8-8f0f-9b92c1ae3ca4 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:46:16.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8064" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":35,"skipped":561,"failed":0}
SSS
------------------------------
[sig-api-machinery] ResourceQuota
should create a ResourceQuota and capture the life of a service. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:46:16.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:46:27.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1020" for this suite.
• [SLOW TEST:11.127 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a service. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":275,"completed":36,"skipped":564,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl api-versions
should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:46:27.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating api versions
Mar 12 23:46:27.619: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config api-versions'
Mar 12 23:46:27.795: INFO: stderr: ""
Mar 12 23:46:27.795: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:46:27.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4656" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":275,"completed":37,"skipped":565,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition
getting/updating/patching custom resource definition status sub-resource works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:46:27.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 12 23:46:27.847: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:46:28.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8388" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":275,"completed":38,"skipped":577,"failed":0}
SSSSS
------------------------------
[sig-scheduling] LimitRange
should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] LimitRange
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:46:28.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename limitrange
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a LimitRange
STEP: Setting up watch
STEP: Submitting a LimitRange
Mar 12 23:46:28.507: INFO: observed the limitRanges list
STEP: Verifying LimitRange creation was observed
STEP: Fetching the LimitRange to ensure it has proper values
Mar 12 23:46:28.516: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}]
Mar 12 23:46:28.516: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with no resource requirements
STEP: Ensuring Pod has resource requirements applied from LimitRange
Mar 12 23:46:28.522: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}]
Mar 12 23:46:28.522: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with partial resource requirements
STEP: Ensuring Pod has merged resource requirements applied from LimitRange
Mar 12 23:46:28.549: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}]
Mar 12 23:46:28.549: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
STEP: Updating a LimitRange
STEP: Verifying LimitRange updating is effective
STEP: Creating a Pod with less than former min resources
STEP: Failing to create a Pod with more than max resources
STEP: Deleting a LimitRange
STEP: Verifying the LimitRange was deleted
Mar 12 23:46:35.610: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:46:35.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-6166" for this suite.
• [SLOW TEST:7.244 seconds]
[sig-scheduling] LimitRange
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":275,"completed":39,"skipped":582,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap
should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:46:35.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap configmap-7809/configmap-test-5430e95e-f61e-481b-ba57-cfae8baec8b1
STEP: Creating a pod to test consume configMaps
Mar 12 23:46:35.750: INFO: Waiting up to 5m0s for pod "pod-configmaps-2fe9a94b-ab10-49f9-9a62-00b790d1822a" in namespace "configmap-7809" to be "Succeeded or Failed"
Mar 12 23:46:35.754: INFO: Pod "pod-configmaps-2fe9a94b-ab10-49f9-9a62-00b790d1822a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178435ms
Mar 12 23:46:37.757: INFO: Pod "pod-configmaps-2fe9a94b-ab10-49f9-9a62-00b790d1822a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006723464s
STEP: Saw pod success
Mar 12 23:46:37.757: INFO: Pod "pod-configmaps-2fe9a94b-ab10-49f9-9a62-00b790d1822a" satisfied condition "Succeeded or Failed"
Mar 12 23:46:37.758: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-2fe9a94b-ab10-49f9-9a62-00b790d1822a container env-test:
STEP: delete the pod
Mar 12 23:46:37.793: INFO: Waiting for pod pod-configmaps-2fe9a94b-ab10-49f9-9a62-00b790d1822a to disappear
Mar 12 23:46:37.815: INFO: Pod pod-configmaps-2fe9a94b-ab10-49f9-9a62-00b790d1822a no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:46:37.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7809" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":40,"skipped":595,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota
should create a ResourceQuota and capture the life of a pod. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:46:37.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota (validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:46:50.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8054" for this suite.
• [SLOW TEST:13.128 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a pod. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":275,"completed":41,"skipped":612,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
should have a working scale subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:46:50.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-2297
[It] should have a working scale subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating statefulset ss in namespace statefulset-2297
Mar 12 23:46:51.048: INFO: Found 0 stateful pods, waiting for 1
Mar 12 23:47:01.052: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Mar 12 23:47:01.072: INFO: Deleting all statefulset in ns statefulset-2297
Mar 12 23:47:01.079: INFO: Scaling statefulset ss to 0
Mar 12 23:47:21.125: INFO: Waiting for statefulset status.replicas updated to 0
Mar 12 23:47:21.127: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:47:21.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2297" for this suite.
• [SLOW TEST:30.201 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should have a working scale subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":42,"skipped":633,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:47:21.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 12 23:47:21.285: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"96e77570-d7ef-4309-8184-abd17336785c", Controller:(*bool)(0xc0039f373a), BlockOwnerDeletion:(*bool)(0xc0039f373b)}}
Mar 12 23:47:21.328: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"dd8da7c1-7baa-4a13-8b57-914ffbf5dedf", Controller:(*bool)(0xc0017c36ba), BlockOwnerDeletion:(*bool)(0xc0017c36bb)}}
Mar 12 23:47:21.333: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"7e08c4a3-84e8-4a98-91af-f2d9d3c90486", Controller:(*bool)(0xc0033d1f42), BlockOwnerDeletion:(*bool)(0xc0033d1f43)}}
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:47:26.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6327" for this suite.
• [SLOW TEST:5.239 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":43,"skipped":667,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:47:26.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on node default medium
Mar 12 23:47:26.476: INFO: Waiting up to 5m0s for pod "pod-ba90e0cc-470f-441c-9336-22053a3e2426" in namespace "emptydir-9102" to be "Succeeded or Failed"
Mar 12 23:47:26.521: INFO: Pod "pod-ba90e0cc-470f-441c-9336-22053a3e2426": Phase="Pending", Reason="", readiness=false. Elapsed: 45.478819ms
Mar 12 23:47:28.525: INFO: Pod "pod-ba90e0cc-470f-441c-9336-22053a3e2426": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.049022277s
STEP: Saw pod success
Mar 12 23:47:28.525: INFO: Pod "pod-ba90e0cc-470f-441c-9336-22053a3e2426" satisfied condition "Succeeded or Failed"
Mar 12 23:47:28.527: INFO: Trying to get logs from node latest-worker pod pod-ba90e0cc-470f-441c-9336-22053a3e2426 container test-container:
STEP: delete the pod
Mar 12 23:47:28.565: INFO: Waiting for pod pod-ba90e0cc-470f-441c-9336-22053a3e2426 to disappear
Mar 12 23:47:28.569: INFO: Pod pod-ba90e0cc-470f-441c-9336-22053a3e2426 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:47:28.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9102" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":44,"skipped":688,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS
should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:47:28.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9489.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-9489.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9489.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9489.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-9489.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9489.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 12 23:47:32.694: INFO: DNS probes using dns-9489/dns-test-64c5b888-395a-40cd-93ed-23882267c5dd succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:47:32.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9489" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":45,"skipped":730,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
patching/updating a mutating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:47:32.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 12 23:47:33.381: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 12 23:47:35.393: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719653653, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719653653, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719653653, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719653653, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 12 23:47:38.410: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:47:38.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4883" for this suite.
STEP: Destroying namespace "webhook-4883-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:5.844 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
patching/updating a mutating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":46,"skipped":758,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected configMap
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:47:38.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-ca6949b1-3030-4291-b0d0-c44ee4632c6d
STEP: Creating configMap with name cm-test-opt-upd-cd786e79-931b-49a1-8f7b-b4761127c0df
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-ca6949b1-3030-4291-b0d0-c44ee4632c6d
STEP: Updating configmap cm-test-opt-upd-cd786e79-931b-49a1-8f7b-b4761127c0df
STEP: Creating configMap with name cm-test-opt-create-6e571717-9f94-460d-8e5b-269e2b253e73
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:48:47.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1875" for this suite.
• [SLOW TEST:68.467 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":47,"skipped":766,"failed":0}
S
------------------------------
[sig-node] Downward API
should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:48:47.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Mar 12 23:48:47.113: INFO: Waiting up to 5m0s for pod "downward-api-2e3b918f-f427-4e46-b387-27095f7ff827" in namespace "downward-api-2217" to be "Succeeded or Failed"
Mar 12 23:48:47.134: INFO: Pod "downward-api-2e3b918f-f427-4e46-b387-27095f7ff827": Phase="Pending", Reason="", readiness=false. Elapsed: 21.780117ms
Mar 12 23:48:49.139: INFO: Pod "downward-api-2e3b918f-f427-4e46-b387-27095f7ff827": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.025909412s
STEP: Saw pod success
Mar 12 23:48:49.139: INFO: Pod "downward-api-2e3b918f-f427-4e46-b387-27095f7ff827" satisfied condition "Succeeded or Failed"
Mar 12 23:48:49.141: INFO: Trying to get logs from node latest-worker2 pod downward-api-2e3b918f-f427-4e46-b387-27095f7ff827 container dapi-container:
STEP: delete the pod
Mar 12 23:48:49.175: INFO: Waiting for pod downward-api-2e3b918f-f427-4e46-b387-27095f7ff827 to disappear
Mar 12 23:48:49.179: INFO: Pod downward-api-2e3b918f-f427-4e46-b387-27095f7ff827 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:48:49.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2217" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":48,"skipped":767,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:48:49.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-configmap-ljf2
STEP: Creating a pod to test atomic-volume-subpath
Mar 12 23:48:49.319: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-ljf2" in namespace "subpath-2814" to be "Succeeded or Failed"
Mar 12 23:48:49.323: INFO: Pod "pod-subpath-test-configmap-ljf2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.136797ms
Mar 12 23:48:51.326: INFO: Pod "pod-subpath-test-configmap-ljf2": Phase="Running", Reason="", readiness=true. Elapsed: 2.007235451s
Mar 12 23:48:53.355: INFO: Pod "pod-subpath-test-configmap-ljf2": Phase="Running", Reason="", readiness=true. Elapsed: 4.036128549s
Mar 12 23:48:55.358: INFO: Pod "pod-subpath-test-configmap-ljf2": Phase="Running", Reason="", readiness=true. Elapsed: 6.039219001s
Mar 12 23:48:57.385: INFO: Pod "pod-subpath-test-configmap-ljf2": Phase="Running", Reason="", readiness=true. Elapsed: 8.066148043s
Mar 12 23:48:59.388: INFO: Pod "pod-subpath-test-configmap-ljf2": Phase="Running", Reason="", readiness=true. Elapsed: 10.068967134s
Mar 12 23:49:01.391: INFO: Pod "pod-subpath-test-configmap-ljf2": Phase="Running", Reason="", readiness=true. Elapsed: 12.072188893s
Mar 12 23:49:03.394: INFO: Pod "pod-subpath-test-configmap-ljf2": Phase="Running", Reason="", readiness=true. Elapsed: 14.075630727s
Mar 12 23:49:05.398: INFO: Pod "pod-subpath-test-configmap-ljf2": Phase="Running", Reason="", readiness=true. Elapsed: 16.079330586s
Mar 12 23:49:07.401: INFO: Pod "pod-subpath-test-configmap-ljf2": Phase="Running", Reason="", readiness=true. Elapsed: 18.082441724s
Mar 12 23:49:09.404: INFO: Pod "pod-subpath-test-configmap-ljf2": Phase="Running", Reason="", readiness=true. Elapsed: 20.085712296s
Mar 12 23:49:11.409: INFO: Pod "pod-subpath-test-configmap-ljf2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.090364775s
STEP: Saw pod success
Mar 12 23:49:11.409: INFO: Pod "pod-subpath-test-configmap-ljf2" satisfied condition "Succeeded or Failed"
Mar 12 23:49:11.411: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-ljf2 container test-container-subpath-configmap-ljf2:
STEP: delete the pod
Mar 12 23:49:11.450: INFO: Waiting for pod pod-subpath-test-configmap-ljf2 to disappear
Mar 12 23:49:11.463: INFO: Pod pod-subpath-test-configmap-ljf2 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-ljf2
Mar 12 23:49:11.463: INFO: Deleting pod "pod-subpath-test-configmap-ljf2" in namespace "subpath-2814"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:49:11.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2814" for this suite.
• [SLOW TEST:22.284 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":49,"skipped":784,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop
should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:49:11.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171
[It] should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating server pod server in namespace prestop-9583
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-9583
STEP: Deleting pre-stop pod
Mar 12 23:49:20.580: INFO: Saw: {
"Hostname": "server",
"Sent": null,
"Received": {
"prestop": 1
},
"Errors": null,
"Log": [
"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
],
"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:49:20.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-9583" for this suite.
• [SLOW TEST:9.139 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":275,"completed":50,"skipped":813,"failed":0}
SSSS
------------------------------
[sig-storage] Projected downwardAPI
should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:49:20.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Mar 12 23:49:20.666: INFO: Waiting up to 5m0s for pod "downwardapi-volume-02a3c423-a041-402b-a364-3c9df130d63a" in namespace "projected-8629" to be "Succeeded or Failed"
Mar 12 23:49:20.721: INFO: Pod "downwardapi-volume-02a3c423-a041-402b-a364-3c9df130d63a": Phase="Pending", Reason="", readiness=false. Elapsed: 55.422973ms
Mar 12 23:49:22.725: INFO: Pod "downwardapi-volume-02a3c423-a041-402b-a364-3c9df130d63a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059021312s
Mar 12 23:49:24.728: INFO: Pod "downwardapi-volume-02a3c423-a041-402b-a364-3c9df130d63a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062838594s
STEP: Saw pod success
Mar 12 23:49:24.728: INFO: Pod "downwardapi-volume-02a3c423-a041-402b-a364-3c9df130d63a" satisfied condition "Succeeded or Failed"
Mar 12 23:49:24.732: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-02a3c423-a041-402b-a364-3c9df130d63a container client-container:
STEP: delete the pod
Mar 12 23:49:24.750: INFO: Waiting for pod downwardapi-volume-02a3c423-a041-402b-a364-3c9df130d63a to disappear
Mar 12 23:49:24.755: INFO: Pod downwardapi-volume-02a3c423-a041-402b-a364-3c9df130d63a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:49:24.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8629" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":51,"skipped":817,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe
should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:49:24.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 12 23:49:24.866: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1902'
Mar 12 23:49:25.204: INFO: stderr: ""
Mar 12 23:49:25.205: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Mar 12 23:49:25.205: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1902'
Mar 12 23:49:25.451: INFO: stderr: ""
Mar 12 23:49:25.451: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Mar 12 23:49:26.455: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 12 23:49:26.455: INFO: Found 0 / 1
Mar 12 23:49:27.455: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 12 23:49:27.455: INFO: Found 1 / 1
Mar 12 23:49:27.455: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Mar 12 23:49:27.458: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 12 23:49:27.458: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Mar 12 23:49:27.458: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config describe pod agnhost-master-4ndl5 --namespace=kubectl-1902'
Mar 12 23:49:27.588: INFO: stderr: ""
Mar 12 23:49:27.588: INFO: stdout: "Name: agnhost-master-4ndl5\nNamespace: kubectl-1902\nPriority: 0\nNode: latest-worker/172.17.0.16\nStart Time: Thu, 12 Mar 2020 23:49:25 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.121\nIPs:\n IP: 10.244.1.121\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://d483ded9ddff467fdb61e930ef3bbf17c68878533a24eed36be126cf6e1f1ff3\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 12 Mar 2020 23:49:26 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-tsd66 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-tsd66:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-tsd66\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 2s default-scheduler Successfully assigned kubectl-1902/agnhost-master-4ndl5 to latest-worker\n Normal Pulled 2s kubelet, latest-worker Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n Normal Created 1s kubelet, latest-worker Created container agnhost-master\n Normal Started 1s kubelet, latest-worker Started container agnhost-master\n"
Mar 12 23:49:27.588: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-1902'
Mar 12 23:49:27.710: INFO: stderr: ""
Mar 12 23:49:27.710: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-1902\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 2s replication-controller Created pod: agnhost-master-4ndl5\n"
Mar 12 23:49:27.710: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-1902'
Mar 12 23:49:27.796: INFO: stderr: ""
Mar 12 23:49:27.796: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-1902\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.96.28.188\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.121:6379\nSession Affinity: None\nEvents: \n"
Mar 12 23:49:27.799: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config describe node latest-control-plane'
Mar 12 23:49:27.893: INFO: stderr: ""
Mar 12 23:49:27.893: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 08 Mar 2020 14:49:22 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Thu, 12 Mar 2020 23:49:22 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Thu, 12 Mar 2020 23:47:45 +0000 Sun, 08 Mar 2020 14:49:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 12 Mar 2020 23:47:45 +0000 Sun, 08 Mar 2020 14:49:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 12 Mar 2020 23:47:45 +0000 Sun, 08 Mar 2020 14:49:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 12 Mar 2020 23:47:45 +0000 Sun, 08 Mar 2020 14:50:16 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.17\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131767112Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131767112Ki\n pods: 110\nSystem Info:\n Machine ID: fb03af8223ea4430b6faaad8b31da5e5\n System UUID: 220fc748-c3b9-4de4-aa76-4a3520169f00\n Boot ID: 3de0b5b8-8b8f-48d3-9705-cabccc881bdb\n Kernel Version: 4.4.0-142-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.17.0\n Kube-Proxy Version: v1.17.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (8 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-gxrvh 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 4d8h\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4d9h\n kube-system kindnet-gp8bt 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 4d8h\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 4d9h\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 4d9h\n kube-system kube-proxy-nxxmk 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4d8h\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 4d9h\n local-path-storage local-path-provisioner-7745554f7f-52xw4 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4d8h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 750m (4%) 100m (0%)\n memory 120Mi (0%) 220Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n"
Mar 12 23:49:27.893: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config describe namespace kubectl-1902'
Mar 12 23:49:27.974: INFO: stderr: ""
Mar 12 23:49:27.974: INFO: stdout: "Name: kubectl-1902\nLabels: e2e-framework=kubectl\n e2e-run=4114b614-3358-44c1-8546-4721f3a73760\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:49:27.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1902" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":275,"completed":52,"skipped":851,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:49:27.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service multi-endpoint-test in namespace services-4586
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4586 to expose endpoints map[]
Mar 12 23:49:28.075: INFO: successfully validated that service multi-endpoint-test in namespace services-4586 exposes endpoints map[] (4.219643ms elapsed)
STEP: Creating pod pod1 in namespace services-4586
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4586 to expose endpoints map[pod1:[100]]
Mar 12 23:49:30.175: INFO: successfully validated that service multi-endpoint-test in namespace services-4586 exposes endpoints map[pod1:[100]] (2.085747826s elapsed)
STEP: Creating pod pod2 in namespace services-4586
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4586 to expose endpoints map[pod1:[100] pod2:[101]]
Mar 12 23:49:32.234: INFO: successfully validated that service multi-endpoint-test in namespace services-4586 exposes endpoints map[pod1:[100] pod2:[101]] (2.055591774s elapsed)
STEP: Deleting pod pod1 in namespace services-4586
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4586 to expose endpoints map[pod2:[101]]
Mar 12 23:49:33.273: INFO: successfully validated that service multi-endpoint-test in namespace services-4586 exposes endpoints map[pod2:[101]] (1.035364938s elapsed)
STEP: Deleting pod pod2 in namespace services-4586
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4586 to expose endpoints map[]
Mar 12 23:49:33.290: INFO: successfully validated that service multi-endpoint-test in namespace services-4586 exposes endpoints map[] (14.337544ms elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:49:33.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4586" for this suite.
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
• [SLOW TEST:5.362 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":275,"completed":53,"skipped":873,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job
should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:49:33.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:49:41.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-8653" for this suite.
• [SLOW TEST:8.069 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":54,"skipped":896,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers
should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:49:41.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override arguments
Mar 12 23:49:41.490: INFO: Waiting up to 5m0s for pod "client-containers-b0cb7f93-6114-497f-9aea-46c64dc0ff91" in namespace "containers-7675" to be "Succeeded or Failed"
Mar 12 23:49:41.495: INFO: Pod "client-containers-b0cb7f93-6114-497f-9aea-46c64dc0ff91": Phase="Pending", Reason="", readiness=false. Elapsed: 4.233433ms
Mar 12 23:49:43.497: INFO: Pod "client-containers-b0cb7f93-6114-497f-9aea-46c64dc0ff91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007092242s
Mar 12 23:49:45.500: INFO: Pod "client-containers-b0cb7f93-6114-497f-9aea-46c64dc0ff91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009794584s
STEP: Saw pod success
Mar 12 23:49:45.500: INFO: Pod "client-containers-b0cb7f93-6114-497f-9aea-46c64dc0ff91" satisfied condition "Succeeded or Failed"
Mar 12 23:49:45.502: INFO: Trying to get logs from node latest-worker pod client-containers-b0cb7f93-6114-497f-9aea-46c64dc0ff91 container test-container:
STEP: delete the pod
Mar 12 23:49:45.520: INFO: Waiting for pod client-containers-b0cb7f93-6114-497f-9aea-46c64dc0ff91 to disappear
Mar 12 23:49:45.525: INFO: Pod client-containers-b0cb7f93-6114-497f-9aea-46c64dc0ff91 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:49:45.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7675" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":55,"skipped":920,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:49:45.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Mar 12 23:49:45.586: INFO: Waiting up to 5m0s for pod "downwardapi-volume-82132f3d-cc19-4608-a686-cb2c40ba40ea" in namespace "downward-api-1992" to be "Succeeded or Failed"
Mar 12 23:49:45.622: INFO: Pod "downwardapi-volume-82132f3d-cc19-4608-a686-cb2c40ba40ea": Phase="Pending", Reason="", readiness=false. Elapsed: 35.278878ms
Mar 12 23:49:47.637: INFO: Pod "downwardapi-volume-82132f3d-cc19-4608-a686-cb2c40ba40ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.050731991s
STEP: Saw pod success
Mar 12 23:49:47.637: INFO: Pod "downwardapi-volume-82132f3d-cc19-4608-a686-cb2c40ba40ea" satisfied condition "Succeeded or Failed"
Mar 12 23:49:47.639: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-82132f3d-cc19-4608-a686-cb2c40ba40ea container client-container:
STEP: delete the pod
Mar 12 23:49:47.672: INFO: Waiting for pod downwardapi-volume-82132f3d-cc19-4608-a686-cb2c40ba40ea to disappear
Mar 12 23:49:47.684: INFO: Pod downwardapi-volume-82132f3d-cc19-4608-a686-cb2c40ba40ea no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:49:47.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1992" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":56,"skipped":933,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:49:47.690: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Mar 12 23:49:47.770: INFO: PodSpec: initContainers in spec.initContainers
Mar 12 23:50:36.644: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-3023fc2c-180c-4a90-b03f-21b1d7fcc75c", GenerateName:"", Namespace:"init-container-3466", SelfLink:"/api/v1/namespaces/init-container-3466/pods/pod-init-3023fc2c-180c-4a90-b03f-21b1d7fcc75c", UID:"298b33f7-9229-4f5e-9d89-ea85d3bfa562", ResourceVersion:"1212310", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63719653787, loc:(*time.Location)(0x7b1e080)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"770536668"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-zjjjv", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001a22080), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-zjjjv", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-zjjjv", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-zjjjv", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc005468068), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000a9c000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0054680f0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc005468110)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc005468118), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00546811c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719653787, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719653787, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719653787, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719653787, loc:(*time.Location)(0x7b1e080)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.16", PodIP:"10.244.1.129", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.129"}}, StartTime:(*v1.Time)(0xc002e46120), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000a9c0e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000a9c150)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://3154e5de77e44a76a48c251a9289c9e2b9715bcffa65d337c122333888d672e6", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002e46180), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002e46160), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc00546819f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:50:36.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3466" for this suite.
• [SLOW TEST:48.963 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":57,"skipped":952,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
removes definition from spec when one version gets changed to not be served [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:50:36.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: set up a multi version CRD
Mar 12 23:50:36.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:50:51.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5177" for this suite.
• [SLOW TEST:14.628 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
removes definition from spec when one version gets changed to not be served [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":58,"skipped":962,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] Deployment
RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:50:51.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 12 23:50:51.336: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Mar 12 23:50:51.380: INFO: Pod name sample-pod: Found 0 pods out of 1
Mar 12 23:50:56.383: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Mar 12 23:50:56.383: INFO: Creating deployment "test-rolling-update-deployment"
Mar 12 23:50:56.405: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Mar 12 23:50:56.428: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Mar 12 23:50:58.432: INFO: Ensuring status for deployment "test-rolling-update-deployment" is as expected
Mar 12 23:50:58.434: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Mar 12 23:50:58.439: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment deployment-4371 /apis/apps/v1/namespaces/deployment-4371/deployments/test-rolling-update-deployment 6ff85d66-d477-44ee-8e70-ee6b3f21ce99 1212449 1 2020-03-12 23:50:56 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003102a38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-12 23:50:56 +0000 UTC,LastTransitionTime:2020-03-12 23:50:56 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-664dd8fc7f" has successfully progressed.,LastUpdateTime:2020-03-12 23:50:57 +0000 UTC,LastTransitionTime:2020-03-12 23:50:56 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}
Mar 12 23:50:58.441: INFO: New ReplicaSet "test-rolling-update-deployment-664dd8fc7f" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f deployment-4371 /apis/apps/v1/namespaces/deployment-4371/replicasets/test-rolling-update-deployment-664dd8fc7f 0871a3fc-0a27-4298-aed5-9ce8032bb357 1212438 1 2020-03-12 23:50:56 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 6ff85d66-d477-44ee-8e70-ee6b3f21ce99 0xc003102ff7 0xc003102ff8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 664dd8fc7f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003103078 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Mar 12 23:50:58.441: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Mar 12 23:50:58.441: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-4371 /apis/apps/v1/namespaces/deployment-4371/replicasets/test-rolling-update-controller 67619f22-5a4b-4fff-a039-fdb498dafb1c 1212447 2 2020-03-12 23:50:51 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 6ff85d66-d477-44ee-8e70-ee6b3f21ce99 0xc003102f07 0xc003102f08}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003102f78 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Mar 12 23:50:58.443: INFO: Pod "test-rolling-update-deployment-664dd8fc7f-8bt4m" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f-8bt4m test-rolling-update-deployment-664dd8fc7f- deployment-4371 /api/v1/namespaces/deployment-4371/pods/test-rolling-update-deployment-664dd8fc7f-8bt4m 0d92ad4e-b12d-4b4f-b7af-d3ae08b76dd5 1212437 0 2020-03-12 23:50:56 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-664dd8fc7f 0871a3fc-0a27-4298-aed5-9ce8032bb357 0xc003103547 0xc003103548}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4hxnn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4hxnn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4hxnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 23:50:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 23:50:57 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 23:50:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-12 23:50:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.16,PodIP:10.244.1.131,StartTime:2020-03-12 23:50:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-12 23:50:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://a9633f6a6f0fd2aad94ffa4539ac926f2629a57c7e2bfe1b20b1f2bc46154269,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.131,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:50:58.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4371" for this suite.
• [SLOW TEST:7.166 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":59,"skipped":973,"failed":0}
SS
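For reference, the Deployment exercised above corresponds roughly to the manifest below; fields the log does not show directly are reconstructed as a sketch rather than the test's exact fixture:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: agnhost
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12

With this strategy the controller brings up the new ReplicaSet's pod before scaling the old one to zero, which is exactly the old/new ReplicaSet pair dumped above.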
------------------------------
[sig-cli] Kubectl client Update Demo
should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:50:58.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
Mar 12 23:50:58.503: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5503'
Mar 12 23:50:58.768: INFO: stderr: ""
Mar 12 23:50:58.768: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 12 23:50:58.768: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5503'
Mar 12 23:50:58.855: INFO: stderr: ""
Mar 12 23:50:58.855: INFO: stdout: "update-demo-nautilus-lg7hx update-demo-nautilus-vv9nv "
Mar 12 23:50:58.855: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lg7hx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5503'
Mar 12 23:50:58.932: INFO: stderr: ""
Mar 12 23:50:58.932: INFO: stdout: ""
Mar 12 23:50:58.932: INFO: update-demo-nautilus-lg7hx is created but not running
Mar 12 23:51:03.932: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5503'
Mar 12 23:51:04.005: INFO: stderr: ""
Mar 12 23:51:04.005: INFO: stdout: "update-demo-nautilus-lg7hx update-demo-nautilus-vv9nv "
Mar 12 23:51:04.005: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lg7hx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5503'
Mar 12 23:51:04.068: INFO: stderr: ""
Mar 12 23:51:04.068: INFO: stdout: "true"
Mar 12 23:51:04.068: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lg7hx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5503'
Mar 12 23:51:04.132: INFO: stderr: ""
Mar 12 23:51:04.132: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 12 23:51:04.132: INFO: validating pod update-demo-nautilus-lg7hx
Mar 12 23:51:04.166: INFO: got data: {
"image": "nautilus.jpg"
}
Mar 12 23:51:04.166: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 12 23:51:04.166: INFO: update-demo-nautilus-lg7hx is verified up and running
Mar 12 23:51:04.166: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vv9nv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5503'
Mar 12 23:51:04.231: INFO: stderr: ""
Mar 12 23:51:04.231: INFO: stdout: "true"
Mar 12 23:51:04.231: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vv9nv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5503'
Mar 12 23:51:04.296: INFO: stderr: ""
Mar 12 23:51:04.296: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 12 23:51:04.296: INFO: validating pod update-demo-nautilus-vv9nv
Mar 12 23:51:04.299: INFO: got data: {
"image": "nautilus.jpg"
}
Mar 12 23:51:04.299: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 12 23:51:04.299: INFO: update-demo-nautilus-vv9nv is verified up and running
STEP: using delete to clean up resources
Mar 12 23:51:04.299: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5503'
Mar 12 23:51:04.381: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 12 23:51:04.381: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Mar 12 23:51:04.381: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5503'
Mar 12 23:51:04.446: INFO: stderr: "No resources found in kubectl-5503 namespace.\n"
Mar 12 23:51:04.446: INFO: stdout: ""
Mar 12 23:51:04.446: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5503 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Mar 12 23:51:04.509: INFO: stderr: ""
Mar 12 23:51:04.509: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:51:04.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5503" for this suite.
• [SLOW TEST:6.066 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":275,"completed":60,"skipped":975,"failed":0}
SSSSSS
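The update-demo replication controller created above can be approximated by the manifest below; the container port is an assumption, since the log only shows the image, the container name and the name=update-demo selector used by the status templates:

apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 80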
------------------------------
[sig-apps] Daemon set [Serial]
should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:51:04.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 12 23:51:04.590: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Mar 12 23:51:04.598: INFO: Number of nodes with available pods: 0
Mar 12 23:51:04.598: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Mar 12 23:51:04.645: INFO: Number of nodes with available pods: 0
Mar 12 23:51:04.645: INFO: Node latest-worker2 is running more than one daemon pod
Mar 12 23:51:05.648: INFO: Number of nodes with available pods: 0
Mar 12 23:51:05.648: INFO: Node latest-worker2 is running more than one daemon pod
Mar 12 23:51:06.649: INFO: Number of nodes with available pods: 1
Mar 12 23:51:06.649: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Mar 12 23:51:06.678: INFO: Number of nodes with available pods: 1
Mar 12 23:51:06.679: INFO: Number of running nodes: 0, number of available pods: 1
Mar 12 23:51:07.683: INFO: Number of nodes with available pods: 0
Mar 12 23:51:07.683: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Mar 12 23:51:07.695: INFO: Number of nodes with available pods: 0
Mar 12 23:51:07.695: INFO: Node latest-worker2 is running more than one daemon pod
Mar 12 23:51:08.700: INFO: Number of nodes with available pods: 0
Mar 12 23:51:08.700: INFO: Node latest-worker2 is running more than one daemon pod
Mar 12 23:51:09.698: INFO: Number of nodes with available pods: 0
Mar 12 23:51:09.698: INFO: Node latest-worker2 is running more than one daemon pod
Mar 12 23:51:10.699: INFO: Number of nodes with available pods: 0
Mar 12 23:51:10.699: INFO: Node latest-worker2 is running more than one daemon pod
Mar 12 23:51:11.700: INFO: Number of nodes with available pods: 0
Mar 12 23:51:11.700: INFO: Node latest-worker2 is running more than one daemon pod
Mar 12 23:51:12.710: INFO: Number of nodes with available pods: 0
Mar 12 23:51:12.710: INFO: Node latest-worker2 is running more than one daemon pod
Mar 12 23:51:13.699: INFO: Number of nodes with available pods: 0
Mar 12 23:51:13.699: INFO: Node latest-worker2 is running more than one daemon pod
Mar 12 23:51:14.699: INFO: Number of nodes with available pods: 1
Mar 12 23:51:14.699: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4537, will wait for the garbage collector to delete the pods
Mar 12 23:51:14.761: INFO: Deleting DaemonSet.extensions daemon-set took: 5.077949ms
Mar 12 23:51:15.061: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.272661ms
Mar 12 23:51:22.164: INFO: Number of nodes with available pods: 0
Mar 12 23:51:22.164: INFO: Number of running nodes: 0, number of available pods: 0
Mar 12 23:51:22.169: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4537/daemonsets","resourceVersion":"1212654"},"items":null}
Mar 12 23:51:22.171: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4537/pods","resourceVersion":"1212654"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:51:22.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4537" for this suite.
• [SLOW TEST:17.697 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":61,"skipped":981,"failed":0}
SSSS
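A DaemonSet of the shape this test drives (node selector flipped from blue to green, then combined with a RollingUpdate strategy) might look like the sketch below; the "color" label key, the pod labels, image and command are assumptions, since the log only reports node and pod counts:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: green
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "sleep 3600"]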
------------------------------
[sig-network] Services
should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:51:22.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service endpoint-test2 in namespace services-592
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-592 to expose endpoints map[]
Mar 12 23:51:22.310: INFO: Get endpoints failed (6.727828ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Mar 12 23:51:23.313: INFO: successfully validated that service endpoint-test2 in namespace services-592 exposes endpoints map[] (1.009733441s elapsed)
STEP: Creating pod pod1 in namespace services-592
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-592 to expose endpoints map[pod1:[80]]
Mar 12 23:51:25.359: INFO: successfully validated that service endpoint-test2 in namespace services-592 exposes endpoints map[pod1:[80]] (2.041292419s elapsed)
STEP: Creating pod pod2 in namespace services-592
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-592 to expose endpoints map[pod1:[80] pod2:[80]]
Mar 12 23:51:27.433: INFO: successfully validated that service endpoint-test2 in namespace services-592 exposes endpoints map[pod1:[80] pod2:[80]] (2.069914869s elapsed)
STEP: Deleting pod pod1 in namespace services-592
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-592 to expose endpoints map[pod2:[80]]
Mar 12 23:51:27.464: INFO: successfully validated that service endpoint-test2 in namespace services-592 exposes endpoints map[pod2:[80]] (28.35933ms elapsed)
STEP: Deleting pod pod2 in namespace services-592
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-592 to expose endpoints map[]
Mar 12 23:51:27.491: INFO: successfully validated that service endpoint-test2 in namespace services-592 exposes endpoints map[] (21.648012ms elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:51:27.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-592" for this suite.
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
• [SLOW TEST:5.313 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":275,"completed":62,"skipped":985,"failed":0}
SSSSS
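The service/endpoint dance above boils down to a Service whose selector matches pods that are created and deleted one at a time. A minimal sketch (the pod label key, image and command are assumptions; the log only names the service, the pods and port 80):

apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
spec:
  selector:
    name: endpoint-test2
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    name: endpoint-test2
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    ports:
    - containerPort: 80

Once pod1 is ready it shows up in the endpoints as pod1:[80], matching the waits logged above; deleting it empties the endpoints again.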
------------------------------
[sig-api-machinery] Namespaces [Serial]
should patch a Namespace [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:51:27.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:51:27.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4342" for this suite.
STEP: Destroying namespace "nspatchtest-26467b84-4864-4bb9-a597-d262148428e0-2" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":63,"skipped":990,"failed":0}
SSSSSSSSSSSSSS
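The namespace patch above amounts to adding a label; the patched object is equivalent to a manifest like this (the label key and value are illustrative, the log does not print them):

apiVersion: v1
kind: Namespace
metadata:
  name: nspatchtest-26467b84-4864-4bb9-a597-d262148428e0-2
  labels:
    testLabel: testValue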
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
should include custom resource definition resources in discovery documents [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:51:27.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:51:27.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4820" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":64,"skipped":1004,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
should mutate configmap [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:51:27.849: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 12 23:51:28.453: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 12 23:51:31.485: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:51:31.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3985" for this suite.
STEP: Destroying namespace "webhook-3985-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":65,"skipped":1005,"failed":0}
SSSSSSSSSSSS
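Registering the mutating webhook in the step above comes down to creating a MutatingWebhookConfiguration that points at the e2e-test-webhook service deployed earlier. A sketch of such an object (the webhook name, path and CA bundle are placeholders; only the service name and namespace appear in the log):

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutate-configmaps.example.com
webhooks:
- name: mutate-configmaps.example.com
  admissionReviewVersions: ["v1", "v1beta1"]
  sideEffects: None
  failurePolicy: Fail
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  clientConfig:
    service:
      namespace: webhook-3985
      name: e2e-test-webhook
      path: /mutating-configmaps
    caBundle: "<base64-encoded CA certificate>"

The configmap created afterwards is then rewritten by the webhook before it is persisted, which is what the test asserts.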
------------------------------
[sig-cli] Kubectl client Kubectl expose
should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:51:31.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
Mar 12 23:51:31.630: INFO: namespace kubectl-9721
Mar 12 23:51:31.630: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9721'
Mar 12 23:51:31.854: INFO: stderr: ""
Mar 12 23:51:31.854: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Mar 12 23:51:32.890: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 12 23:51:32.890: INFO: Found 0 / 1
Mar 12 23:51:33.858: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 12 23:51:33.858: INFO: Found 1 / 1
Mar 12 23:51:33.858: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Mar 12 23:51:33.861: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 12 23:51:33.861: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Mar 12 23:51:33.861: INFO: wait on agnhost-master startup in kubectl-9721
Mar 12 23:51:33.861: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config logs agnhost-master-wq6dw agnhost-master --namespace=kubectl-9721'
Mar 12 23:51:33.972: INFO: stderr: ""
Mar 12 23:51:33.972: INFO: stdout: "Paused\n"
STEP: exposing RC
Mar 12 23:51:33.972: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-9721'
Mar 12 23:51:34.065: INFO: stderr: ""
Mar 12 23:51:34.065: INFO: stdout: "service/rm2 exposed\n"
Mar 12 23:51:34.079: INFO: Service rm2 in namespace kubectl-9721 found.
STEP: exposing service
Mar 12 23:51:36.085: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-9721'
Mar 12 23:51:36.241: INFO: stderr: ""
Mar 12 23:51:36.241: INFO: stdout: "service/rm3 exposed\n"
Mar 12 23:51:36.266: INFO: Service rm3 in namespace kubectl-9721 found.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:51:38.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9721" for this suite.
• [SLOW TEST:6.700 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl expose
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119
should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":275,"completed":66,"skipped":1017,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
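The two expose calls above generate plain Services; the first is equivalent to a manifest like the following (the selector app: agnhost comes from the pod selector shown in the log, the rest mirrors the command's flags):

apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: kubectl-9721
spec:
  selector:
    app: agnhost
  ports:
  - port: 1234
    targetPort: 6379

The second call, expose service rm2, produces an rm3 Service with port 2345, the same selector and the same target port.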
------------------------------
[sig-auth] ServiceAccounts
should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:51:38.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
STEP: reading a file in the container
Mar 12 23:51:40.867: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8127 pod-service-account-7eaa98f4-fd25-4577-8b5d-07a46ac6a21a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Mar 12 23:51:41.104: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8127 pod-service-account-7eaa98f4-fd25-4577-8b5d-07a46ac6a21a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Mar 12 23:51:41.279: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8127 pod-service-account-7eaa98f4-fd25-4577-8b5d-07a46ac6a21a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:51:41.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8127" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":275,"completed":67,"skipped":1055,"failed":0}
SSSSS
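The exec commands above read the three files that the kubelet projects into every pod using a service account. A pod sketch that reproduces this (image and command are illustrative; the container name test matches the -c=test flag in the log):

apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-test
spec:
  serviceAccountName: default
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "sleep 3600"]

Inside the container, token, ca.crt and namespace are available under /var/run/secrets/kubernetes.io/serviceaccount, the paths read in the log.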
------------------------------
[sig-cli] Kubectl client Kubectl version
should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:51:41.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 12 23:51:41.480: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config version'
Mar 12 23:51:41.578: INFO: stderr: ""
Mar 12 23:51:41.579: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.0.749+55bb72b77444f7\", GitCommit:\"55bb72b77444f7279fb268652df377422792c9f0\", GitTreeState:\"clean\", BuildDate:\"2020-03-12T17:51:58Z\", GoVersion:\"go1.13.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:09:19Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:51:41.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1104" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":275,"completed":68,"skipped":1060,"failed":0}
------------------------------
[k8s.io] Variable Expansion
should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:51:41.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's args
Mar 12 23:51:41.664: INFO: Waiting up to 5m0s for pod "var-expansion-3edf9ff8-11e9-4a0f-af47-63d9f2aaef69" in namespace "var-expansion-5688" to be "Succeeded or Failed"
Mar 12 23:51:41.669: INFO: Pod "var-expansion-3edf9ff8-11e9-4a0f-af47-63d9f2aaef69": Phase="Pending", Reason="", readiness=false. Elapsed: 5.124049ms
Mar 12 23:51:43.672: INFO: Pod "var-expansion-3edf9ff8-11e9-4a0f-af47-63d9f2aaef69": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008433878s
Mar 12 23:51:45.676: INFO: Pod "var-expansion-3edf9ff8-11e9-4a0f-af47-63d9f2aaef69": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012321015s
STEP: Saw pod success
Mar 12 23:51:45.676: INFO: Pod "var-expansion-3edf9ff8-11e9-4a0f-af47-63d9f2aaef69" satisfied condition "Succeeded or Failed"
Mar 12 23:51:45.679: INFO: Trying to get logs from node latest-worker pod var-expansion-3edf9ff8-11e9-4a0f-af47-63d9f2aaef69 container dapi-container:
STEP: delete the pod
Mar 12 23:51:45.712: INFO: Waiting for pod var-expansion-3edf9ff8-11e9-4a0f-af47-63d9f2aaef69 to disappear
Mar 12 23:51:45.715: INFO: Pod var-expansion-3edf9ff8-11e9-4a0f-af47-63d9f2aaef69 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:51:45.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5688" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":69,"skipped":1060,"failed":0}
SSSS
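Substitution in a container's args relies on the $(VAR) syntax being expanded by the kubelet from the container's env. A minimal pod of that shape (variable name, value and image are assumptions; dapi-container is the container name from the log):

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-args
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: TEST_VAR
      value: "test-value"
    command: ["sh", "-c"]
    args: ["echo $(TEST_VAR)"]

The test then waits for the pod to reach Succeeded and inspects the container log for the expanded value.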
------------------------------
[sig-storage] HostPath
should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:51:45.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test hostPath mode
Mar 12 23:51:45.795: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-76" to be "Succeeded or Failed"
Mar 12 23:51:45.798: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.57293ms
Mar 12 23:51:47.801: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006037136s
Mar 12 23:51:49.804: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009419917s
STEP: Saw pod success
Mar 12 23:51:49.804: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Mar 12 23:51:49.807: INFO: Trying to get logs from node latest-worker2 pod pod-host-path-test container test-container-1:
STEP: delete the pod
Mar 12 23:51:49.835: INFO: Waiting for pod pod-host-path-test to disappear
Mar 12 23:51:49.845: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:51:49.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-76" for this suite.
•{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":70,"skipped":1064,"failed":0}
SSSSSSSSSSSS
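The hostPath test creates a pod that mounts a directory from the node and reports the volume's mode. A rough equivalent (host path, image and command are illustrative; test-container-1 is the container name from the log):

apiVersion: v1
kind: Pod
metadata:
  name: pod-host-path-test
spec:
  restartPolicy: Never
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/host-path-test
      type: DirectoryOrCreate
  containers:
  - name: test-container-1
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume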
------------------------------
[sig-api-machinery] Garbage collector
should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:51:49.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0312 23:51:51.046342 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 12 23:51:51.046: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:51:51.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-212" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":71,"skipped":1076,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:51:51.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
Mar 12 23:51:51.188: INFO: Waiting up to 5m0s for pod "pod-63b3357d-3dcf-4612-8d2d-6553f725cd2d" in namespace "emptydir-8295" to be "Succeeded or Failed"
Mar 12 23:51:51.193: INFO: Pod "pod-63b3357d-3dcf-4612-8d2d-6553f725cd2d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.357484ms
Mar 12 23:51:53.196: INFO: Pod "pod-63b3357d-3dcf-4612-8d2d-6553f725cd2d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007973387s
STEP: Saw pod success
Mar 12 23:51:53.196: INFO: Pod "pod-63b3357d-3dcf-4612-8d2d-6553f725cd2d" satisfied condition "Succeeded or Failed"
Mar 12 23:51:53.199: INFO: Trying to get logs from node latest-worker pod pod-63b3357d-3dcf-4612-8d2d-6553f725cd2d container test-container:
STEP: delete the pod
Mar 12 23:51:53.257: INFO: Waiting for pod pod-63b3357d-3dcf-4612-8d2d-6553f725cd2d to disappear
Mar 12 23:51:53.263: INFO: Pod pod-63b3357d-3dcf-4612-8d2d-6553f725cd2d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:51:53.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8295" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":72,"skipped":1129,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
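The (non-root,0666,tmpfs) case boils down to a memory-backed emptyDir written by a non-root user, with a file created at mode 0666. A sketch (UID, image and command are illustrative; test-container is the container name from the log):

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0666
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume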
------------------------------
[k8s.io] Variable Expansion
should allow composing env vars into new env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:51:53.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test env composition
Mar 12 23:51:53.330: INFO: Waiting up to 5m0s for pod "var-expansion-7051b3b3-b649-4e67-a8e6-86265fce02d1" in namespace "var-expansion-1048" to be "Succeeded or Failed"
Mar 12 23:51:53.334: INFO: Pod "var-expansion-7051b3b3-b649-4e67-a8e6-86265fce02d1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.402284ms
Mar 12 23:51:55.337: INFO: Pod "var-expansion-7051b3b3-b649-4e67-a8e6-86265fce02d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007266216s
STEP: Saw pod success
Mar 12 23:51:55.337: INFO: Pod "var-expansion-7051b3b3-b649-4e67-a8e6-86265fce02d1" satisfied condition "Succeeded or Failed"
Mar 12 23:51:55.340: INFO: Trying to get logs from node latest-worker pod var-expansion-7051b3b3-b649-4e67-a8e6-86265fce02d1 container dapi-container:
STEP: delete the pod
Mar 12 23:51:55.395: INFO: Waiting for pod var-expansion-7051b3b3-b649-4e67-a8e6-86265fce02d1 to disappear
Mar 12 23:51:55.400: INFO: Pod var-expansion-7051b3b3-b649-4e67-a8e6-86265fce02d1 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:51:55.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1048" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":73,"skipped":1161,"failed":0}
S
------------------------------
[sig-storage] Projected configMap
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:51:55.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with configMap that has name projected-configmap-test-upd-422cfc61-0d34-4ee0-a81d-a4e804dc7f66
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-422cfc61-0d34-4ee0-a81d-a4e804dc7f66
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:53:08.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9198" for this suite.
• [SLOW TEST:72.785 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":74,"skipped":1162,"failed":0}
SSSSSSSSSSSSSSS
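The long wait above comes from the kubelet's periodic sync: after the ConfigMap is updated, the projected volume contents are refreshed in place. A pod of the shape this test watches (the data key, mount path and image are assumptions; the ConfigMap name is the one created in the log):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-watch
spec:
  volumes:
  - name: config
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-upd-422cfc61-0d34-4ee0-a81d-a4e804dc7f66
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/projected-config/data-1; sleep 5; done"]
    volumeMounts:
    - name: config
      mountPath: /etc/projected-config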
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
should mutate custom resource with different stored version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:53:08.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 12 23:53:08.827: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 12 23:53:10.837: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719653988, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719653988, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719653988, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719653988, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 12 23:53:13.881: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 12 23:53:13.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2051-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:53:15.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-515" for this suite.
STEP: Destroying namespace "webhook-515-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:7.033 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate custom resource with different stored version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":75,"skipped":1177,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:53:15.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-11393df6-e659-4d44-83b3-4b3cbb7ad85d
STEP: Creating a pod to test consume configMaps
Mar 12 23:53:15.298: INFO: Waiting up to 5m0s for pod "pod-configmaps-e265cb53-3966-4491-a3ca-0a399615da8f" in namespace "configmap-6669" to be "Succeeded or Failed"
Mar 12 23:53:15.328: INFO: Pod "pod-configmaps-e265cb53-3966-4491-a3ca-0a399615da8f": Phase="Pending", Reason="", readiness=false. Elapsed: 30.67176ms
Mar 12 23:53:17.331: INFO: Pod "pod-configmaps-e265cb53-3966-4491-a3ca-0a399615da8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.033921172s
STEP: Saw pod success
Mar 12 23:53:17.332: INFO: Pod "pod-configmaps-e265cb53-3966-4491-a3ca-0a399615da8f" satisfied condition "Succeeded or Failed"
Mar 12 23:53:17.334: INFO: Trying to get logs from node latest-worker pod pod-configmaps-e265cb53-3966-4491-a3ca-0a399615da8f container configmap-volume-test:
STEP: delete the pod
Mar 12 23:53:17.362: INFO: Waiting for pod pod-configmaps-e265cb53-3966-4491-a3ca-0a399615da8f to disappear
Mar 12 23:53:17.366: INFO: Pod pod-configmaps-e265cb53-3966-4491-a3ca-0a399615da8f no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:53:17.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6669" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":76,"skipped":1194,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application
should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:53:17.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating all guestbook components
Mar 12 23:53:17.462: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend
Mar 12 23:53:17.462: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-838'
Mar 12 23:53:17.788: INFO: stderr: ""
Mar 12 23:53:17.788: INFO: stdout: "service/agnhost-slave created\n"
Mar 12 23:53:17.788: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend
Mar 12 23:53:17.788: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-838'
Mar 12 23:53:18.048: INFO: stderr: ""
Mar 12 23:53:18.048: INFO: stdout: "service/agnhost-master created\n"
Mar 12 23:53:18.048: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Mar 12 23:53:18.048: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-838'
Mar 12 23:53:18.321: INFO: stderr: ""
Mar 12 23:53:18.321: INFO: stdout: "service/frontend created\n"
Mar 12 23:53:18.321: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
Mar 12 23:53:18.321: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-838'
Mar 12 23:53:18.527: INFO: stderr: ""
Mar 12 23:53:18.527: INFO: stdout: "deployment.apps/frontend created\n"
Mar 12 23:53:18.527: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Mar 12 23:53:18.527: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-838'
Mar 12 23:53:18.848: INFO: stderr: ""
Mar 12 23:53:18.848: INFO: stdout: "deployment.apps/agnhost-master created\n"
Mar 12 23:53:18.849: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Mar 12 23:53:18.849: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-838'
Mar 12 23:53:19.187: INFO: stderr: ""
Mar 12 23:53:19.187: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Mar 12 23:53:19.187: INFO: Waiting for all frontend pods to be Running.
Mar 12 23:53:24.238: INFO: Waiting for frontend to serve content.
Mar 12 23:53:24.247: INFO: Trying to add a new entry to the guestbook.
Mar 12 23:53:24.257: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Mar 12 23:53:24.264: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-838'
Mar 12 23:53:24.388: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 12 23:53:24.388: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Mar 12 23:53:24.389: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-838'
Mar 12 23:53:24.520: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 12 23:53:24.520: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Mar 12 23:53:24.520: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-838'
Mar 12 23:53:24.634: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 12 23:53:24.634: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Mar 12 23:53:24.635: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-838'
Mar 12 23:53:24.705: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 12 23:53:24.705: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Mar 12 23:53:24.705: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-838'
Mar 12 23:53:24.796: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 12 23:53:24.796: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Mar 12 23:53:24.796: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-838'
Mar 12 23:53:24.891: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 12 23:53:24.891: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 12 23:53:24.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-838" for this suite.
• [SLOW TEST:7.552 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Guestbook application
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310
should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":275,"completed":77,"skipped":1216,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
works for CRD with validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 12 23:53:24.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 12 23:53:25.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Mar 12 23:53:27.904: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8613 create -f -'
Mar 12 23:53:29.953: INFO: stderr: ""
Mar 12 23:53:29.953: INFO: stdout: "e2e-test-crd-publish-openapi-8392-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Mar 12 23:53:29.953: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8613 delete e2e-test-crd-publish-openapi-8392-crds test-foo'
Mar 12 23:53:30.058: INFO: stderr: ""
Mar 12 23:53:30.058: INFO: stdout: "e2e-test-crd-publish-openapi-8392-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Mar 12 23:53:30.059: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8613 apply -f -'
Mar 12 23:53:30.330: INFO: stderr: ""
Mar 12 23:53:30.330: INFO: stdout: "e2e-test-crd-publish-openapi-8392-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Mar 12 23:53:30.330: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8613 delete e2e-test-crd-publish-openapi-8392-crds test-foo'
Mar 12 23:53:30.411: INFO: stderr: ""
Mar 12 23:53:30.411: INFO: stdout: "e2e-test-crd-publish-openapi-8392-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Mar 12 23:53:30.411: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8613 create -f -'
Mar 12 23:53:30.856: INFO: rc: 1
Mar 12 23:53:30.856: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8613 apply -f -'
Mar 12 23:53:31.098: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Mar 12 23:53:31.099: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8613 create -f -'
Mar 12 23:53:31.321: INFO: rc: 1
Mar 12 23:53:31.321: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8613 apply -f -'
Mar 12 23:53:31.513: INFO: rc: 1
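The rc: 1 results above are the expected client-side rejections; a CRD published with a validation schema of roughly the shape sketched below would produce them. The group, plural, kind, and description are taken from the log; the property names and the rest of the schema are illustrative, not the framework's exact test schema.

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-crd-publish-openapi-8392-crds.crd-publish-openapi-test-foo.example.com
spec:
  group: crd-publish-openapi-test-foo.example.com
  scope: Namespaced
  names:
    plural: e2e-test-crd-publish-openapi-8392-crds
    singular: e2e-test-crd-publish-openapi-8392-crd
    kind: E2e-test-crd-publish-openapi-8392-crd
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        description: Foo CRD for Testing
        type: object
        properties:
          spec:
            type: object
            required: ["bars"]                 # illustrative required property
            properties:
              bars:
                type: array
                items:
                  type: object
                  required: ["name"]
                  properties:
                    name:
                      type: string
        # no x-kubernetes-preserve-unknown-fields here, so kubectl's client-side
        # validation flags unknown properties and missing required ones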
STEP: kubectl explain works to explain CR properties
Mar 12 23:53:31.513: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32776 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8392-crds'
Mar 12 23:53:31.722: INFO: stderr: ""
Mar 12 23:53:31.722: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8392-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t